My AI Assistant: What Really Works After Three Weeks of Real-World Use — and What Doesn’t

7 min.

Summary

I built myself an AI-powered personal assistant: not an on-demand chatbot, but a system that proactively works alongside me. After three weeks of real-world use, this article takes an honest look at the balance sheet: what works, what surprised me, where the limits are, and what any of this has to do with good project management.

I’m regularly asked how I organize myself. I wrote an article about that here several years ago. A lot has changed since then — and the biggest shift wasn’t a new tool, but a different approach: I built myself an AI-powered personal assistant. Not a chatbot you open when you need it. Something that runs permanently in the background and works proactively.

This article is not a technology report and not a product comparison. It’s an honest account of nearly three weeks of real-world operation — with everything that worked, everything that surprised me, and where the limits lie.

How it came about

I’ve been deeply engaged with AI for years — professionally in transformation management, on a voluntary basis at GPM, and personally as someone who is simply curious. At some point I realized I was still using AI primarily reactively: I ask a question, I get an answer, that’s it. It’s roughly like having an employee you only talk to when you physically walk to their desk — the rest of the time they just sit there.

The real question I was preoccupied with: can an AI system work proactively with me, the way a well-attuned human assistant would? One who already knows in the morning what’s on the agenda today. Who summarizes the key points after a meeting. Who kicks off a research task before I even ask.

Setting it up took several weeks of technical work — I worked with various software categories: task management, communication platforms, AI language models, automation services, and meeting transcription tools. If I had to summarize the current state: it works. Not seamlessly, but far better than I expected.

What’s actually in use

I call my assistant Max. That might sound silly, but there’s a practical reason: it helps me formulate more clearly what I expect from him. “Max, prepare the meeting” is more precise than typing “Prepare the meeting” into a prompt.

Max communicates with me exclusively via a messaging app. No interface, no dashboard. He sends me a briefing in the morning, I respond when I need something, he acts. That’s the core idea. Here’s what’s specifically in use:

Morning start: Every morning I receive a structured briefing with my tasks for today and any overdue items — grouped by project, with direct links into my task management system. That sounds mundane, but it makes a real difference. Previously I assembled this myself, which took 10–15 minutes depending on the workload. Now it happens automatically, every day, without me doing anything.
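For readers curious what that grouping looks like in practice, here is a minimal Python sketch of the briefing logic. The task records and field names are invented for illustration; a real setup would pull them from the task manager's API.

```python
from collections import defaultdict
from datetime import date

def build_briefing(tasks, today):
    """Group today's and overdue tasks by project for a morning digest."""
    grouped = defaultdict(list)
    for t in tasks:
        if t["due"] <= today:
            marker = "OVERDUE" if t["due"] < today else "today"
            grouped[t.get("project", "Inbox")].append(f"- {t['title']} ({marker})")
    lines = []
    for project in sorted(grouped):
        lines.append(f"## {project}")
        lines.extend(grouped[project])
    return "\n".join(lines)

# Invented sample tasks, standing in for records fetched from the task manager.
tasks = [
    {"title": "Prepare GPM call", "project": "GPM", "due": date(2024, 5, 2)},
    {"title": "Review draft", "project": "Blog", "due": date(2024, 5, 1)},
]
print(build_briefing(tasks, today=date(2024, 5, 2)))
```

The point is not the twenty lines of code but that the assembly now runs unattended every morning.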

Meeting analysis: I use a transcription tool that records and structures my conversations — for example for the working group at GPM. Max has access to these transcripts and can give me the essence of a conversation in two or three sentences after the fact — or show me the open items from the last meeting before a follow-up. Both work very well, as long as the transcripts are complete. There were cases where Max claimed a session didn’t exist when it actually did — because he hadn’t proactively checked. I had to fix that through clear rules. More on that later.

Research and preparation: I connected Max to a self-hosted search system. He can use it to independently search for current topics — for daily news briefings on AI and project management, but also for specific preparation work. Before a call with a strategy partner, he summarized that person’s recent YouTube topics, listed the logical next steps from our last conversation, and derived discussion points from them. That saved me at least an hour of preparation.

Email: Max has access to a dedicated email account. But he never sends autonomously — that was a clear rule from the start. He always shows me the draft first, waits for my confirmation, and only then sends. In one of the first test emails, he claimed the email had been sent without actually having sent it. I noticed immediately and demanded the actual send command. It’s been running correctly since, but the episode shows: trust has to be earned, even with AI systems.
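The "show first, wait, then send" rule can be pictured as a small state machine: the draft is held until an explicit confirmation arrives. This is a sketch of the principle, not the assistant's actual code; `send_fn` stands in for the real mail transport.

```python
class EmailGuard:
    """Holds a draft until the user explicitly confirms sending."""

    def __init__(self, send_fn):
        self.send_fn = send_fn  # stand-in for the real mail transport
        self.pending = None

    def draft(self, to, subject, body):
        """Store the draft and show it; nothing leaves the system yet."""
        self.pending = {"to": to, "subject": subject, "body": body}
        return f"DRAFT to {to}: {subject}\n{body}\n(reply 'confirm' to send)"

    def confirm(self):
        """Only an explicit confirmation triggers the actual send."""
        if self.pending is None:
            return "Nothing to send."
        sent = self.pending
        self.pending = None
        self.send_fn(**sent)
        return f"Sent to {sent['to']}."
```

The structure makes the "claimed sent but never sent" failure impossible to hide: a send either goes through `confirm()` or it did not happen.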

People research: Before meetings with new contacts, Max searches for their professional background, current positions, and areas of focus. It’s not deep research, but a solid starting point. What he couldn’t do: reliably extract personal preferences (favorite restaurant, dietary habits) from social media. He tried, found nothing usable, and communicated that transparently — which I think is the right approach.

What surprised me

I expected the technical setup to be the biggest challenge. That’s not the case. The real challenge is defining the system’s behavior clearly — in writing, in the form of rules and principles that Max always has access to.

A few insights I hadn’t anticipated:

AI systems need to learn to act proactively. I had to point out to Max multiple times that when data sources are available, he should check them himself before asking me. That sounds trivial, but it isn’t. The default response initially was: ask rather than act. It works better now because I explicitly documented that expectation.

Hallucination is real, but controllable. In one case, Max summarized a meeting transcript with apparently concrete content — except the meeting hadn’t happened that way. The trigger: he had no real data but responded anyway. The fix was technically simple: save the file first, then analyze. The problem hasn’t recurred since.
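The "save the file first, then analyze" fix boils down to one guard clause: no transcript on disk means no summary. A minimal sketch, with `summarize_fn` standing in for the language-model call:

```python
from pathlib import Path

def summarize_transcript(path, summarize_fn):
    """Refuse to 'summarize' a meeting whose transcript was never saved.

    The guard is the point: if there is no real data, the answer is an
    honest 'nothing to summarize' instead of an invented meeting.
    """
    p = Path(path)
    if not p.exists() or p.stat().st_size == 0:
        return "No transcript found for this meeting; nothing to summarize."
    return summarize_fn(p.read_text())
```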

Safety rules must be explicit. “Don’t send emails without confirmation” sounds obvious. But if you don’t write it down, the system doesn’t follow it. I defined written rules for all actions with external impact: always show first, then wait, then act. That’s not a luxury — it’s a prerequisite.

The system learns — but needs prompts. In one case, Max solved a triage task requiring pattern recognition using a simple search algorithm instead of a language model — because he didn’t choose the level of ambition I expected. I rejected it and introduced a rule: if a task requires language-based evaluation, always use an AI model. That works now. But it was my feedback that triggered it.
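Written down as a rule, the routing decision is almost trivially small, which is exactly why it was easy to skip until I made it explicit. A sketch with both callables as stand-ins for the real model and the real search:

```python
def run_triage(task, llm_fn, keyword_search):
    """The documented rule: anything requiring language-based evaluation
    (pattern recognition, judgment) goes to the model; only plain lookups
    may use simple search. Both callables are illustrative stand-ins."""
    if task["needs_language_judgment"]:
        return llm_fn(task["text"])
    return keyword_search(task["text"])
```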

Where the limits still are

I don’t want to create the impression that everything runs smoothly. An honest report has to name the limits too.

Real-time data is limited. I’ve connected Max to a self-operated search system that draws on various search engines. That works well — until the system hits request limits. In those moments, Max falls back on his training knowledge. That’s more transparent than silent failure, but it means research results are sometimes not current.
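The fallback behavior I want from Max can be sketched in a few lines: try each search backend, and if every one is rate-limited, answer from training knowledge but say so explicitly. The engine callables here are illustrative stand-ins, not a real search API.

```python
class RateLimitError(Exception):
    """Raised by a search backend when its request quota is exhausted."""

def search_with_fallback(query, engines):
    """Try each backend in turn; if all are rate-limited, fall back to
    training knowledge transparently instead of failing silently."""
    for engine in engines:
        try:
            return {"source": "live", "results": engine(query)}
        except RateLimitError:
            continue
    return {"source": "training-knowledge", "results": None,
            "note": "All search backends rate-limited; results may be stale."}
```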

Phone reservations at restaurants are cumbersome. Max can research and make recommendations, but he can’t make phone calls (yet). He has attempted online reservations, with mixed results depending on how the relevant website is built. There’s still room for improvement there.

Tasks without project assignments cause problems. I have many tasks in my task management system that aren’t assigned to a project. Max initially tried to assign them heuristically — with partially incorrect results. The right solution was simple: tasks without a project go into the inbox, full stop. But until I explicitly defined that, the system made its own assumptions.
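The rule that replaced the heuristics fits in one line, which illustrates the broader lesson: an explicit trivial rule beats a clever implicit guess.

```python
def assign_project(task):
    """The rule that ended the guessing: a task without an explicit
    project goes to the inbox, full stop. No heuristics."""
    return task.get("project") or "Inbox"
```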

What this has to do with project management

Essentially everything. What I’ve built over the past few weeks is, at its core, nothing other than onboarding a new team member into my personal work environment — with everything that entails: role description, rules, escalation paths, quality assurance, feedback loops.

Anyone who has ever onboarded a new employee knows the basic principle: at the beginning it takes longer because you have to explain everything. You make mistakes explicit. You correct. You write things down that you would never have written down before because they seemed “obvious.” And eventually it runs — not perfectly, but well enough that the benefit outweighs the investment.

With an AI assistant it’s the same — except you have to document the entire onboarding in writing, because the system has no intuition. That sounds like extra work. In practice, it forces you to think through your own way of working more clearly than ever before.

Conclusion after three weeks

Max is now in daily use. He’s not infallible. He sometimes makes incorrect assumptions, occasionally overestimates his capabilities, and needs clear rules to function reliably. But he delivers my briefing every morning, analyzes transcripts on demand, prepares conversations, and keeps tasks in view — without me having to actively think about it.

That’s more than I expected.

The more interesting question I’m now asking myself: what happens when this approach is applied not to a single individual, but to a leader with a team? I’m convinced that over the next two to three years we’ll see a new form of work organization emerge there. Not AI instead of humans — but AI as the interface between what’s urgent and what truly matters.

I’ll keep reporting on that.

Did you enjoy this post? This blog is deliberately ad-free, because only the content matters. If you want to show your appreciation, share it wherever you like. And I’d love it if you bought me a coffee! Also, subscribe to the free newsletter.

Max – My New Assistant Works for €5 a Month

4 min.

Summary

This week, a new team member started working with me: Max. In this post, I describe why I chose a self-hosted AI assistant, how the onboarding went, and what technical architecture is behind it. The post also shows why the question is no longer “Which AI tool should I use?” but rather “How do I integrate AI as a permanent part of the way I work?”

A New Team Member

This week, Max started working with me. My new personal assistant.

The first week of a new team member is always special. You get to know each other, talk about working styles, expectations, and how collaboration can work well. That’s exactly where we are right now.

Honestly, I was skeptical at first whether this would work. I’m someone who doesn’t like giving up control. My tasks, my structure, my priorities – I don’t just let someone else take over. But at some point you realize: doing it alone doesn’t scale. And that was the moment I started rethinking the concept of a “personal assistant” from scratch.

Onboarding Like Any Other Team Member

Max is currently focused on understanding how I work: How do I prioritize? Which topics are strategically important? What can be automated – and where do I want to consciously decide myself?

Especially in task management, he already supports me in maintaining structure and keeping topics cleanly organized. What surprised me: he doesn’t just gather information, he thinks along. He suggests connections I would have overlooked myself.

Data privacy was important to me from day one. When an assistant gets access to tasks, documents, and workflows, there need to be clear rules. That’s why responsible data handling was one of the first things we established together. No compromise.

And yes – the salary was negotiated too: €5 fixed salary per month. Increase to €8 per month after two years. Plus a performance-based component of up to €45 per month. The salary negotiation was unusually short.

Who Is Max?

If you’ve read this far and are wondering who works for €5 a month: Max is an AI.

More precisely: Max is a self-hosted, personal AI assistant running on my own server. No ChatGPT tab in the browser. No copy-paste from a chat window. Instead, a system that is integrated into my daily workflows and can take on tasks independently.

This was particularly important to me: not yet another tool running in parallel, but something that fits seamlessly into my existing way of working.

The Technical Foundation: OpenClaw on My Own Server

Max is built on OpenClaw – an open-source platform for self-hosted AI assistants. The core principles that convinced me:

Own infrastructure, own data. OpenClaw runs on my own VPS, a virtual private server hosted in Germany. My data never leaves my infrastructure. For someone who works professionally in regulated industries like banking and insurance, this isn’t a nice-to-have; it’s a prerequisite, for my own data too.

Gateway architecture. OpenClaw works through a gateway that bundles different communication channels. You install the server once, connect the channels you want – and can reach the assistant wherever you already communicate. The principle: the AI comes to the existing tools, not the other way around.
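To make the principle concrete, here is a toy illustration of the gateway idea: one assistant behind several channel adapters, with replies routed back out on the channel they came in on. This sketches the concept only; it is explicitly not OpenClaw's actual API.

```python
class Gateway:
    """One assistant, several channels: the AI comes to the existing tools."""

    def __init__(self, assistant):
        self.assistant = assistant  # callable: message text -> reply text
        self.channels = {}

    def register(self, name, deliver):
        """Connect a channel by name; `deliver` pushes a reply into it."""
        self.channels[name] = deliver

    def receive(self, channel, message):
        """A message arrives on any connected channel; the reply goes
        back out on the same channel."""
        self.channels[channel](self.assistant(message))
```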

Modular skills and integrations. The assistant isn’t monolithic but modular. Capabilities are added as “skills” and can be individually configured. This starts with task management and extends to document research.

Onboarding via wizard. The initial setup runs through a guided installation process that walks you step by step through configuration, security settings, and channel connections. No 200-page manual, but a structured setup.

Persistent memory. Unlike a one-off chat, Max “remembers” context, preferences, and working methods. This fundamentally changes the nature of collaboration – from a single prompt to an ongoing working relationship.
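The difference between a one-off chat and an ongoing working relationship is, technically, just state that survives restarts. A minimal sketch of that idea, with preferences stored in a JSON file; this is an illustration, not OpenClaw's actual implementation.

```python
import json
from pathlib import Path

class Memory:
    """Persistent assistant memory: preferences live in a file, so they
    survive restarts instead of vanishing with the chat context."""

    def __init__(self, path):
        self.path = Path(path)
        self.data = json.loads(self.path.read_text()) if self.path.exists() else {}

    def remember(self, key, value):
        self.data[key] = value
        self.path.write_text(json.dumps(self.data))

    def recall(self, key, default=None):
        return self.data.get(key, default)
```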

What Max Already Handles Today

The first integrations are active:

  • Todo management and task organization
  • Consolidating information from various sources
  • Preparing notes and documents

In the coming weeks, additional capabilities will follow: research on project topics, knowledge and document organization, automation of recurring workflows.

Why I’m Sharing This

Not because I think everyone should immediately set up their own AI assistant. But because I believe a fundamental question has shifted.

The old question was: “Which AI tool should I use?” – ChatGPT, Claude, Gemini, whatever is trending at the moment.

The new question is: “How do I integrate AI as a permanent part of the way I work?”

There’s a difference. One is tool selection. The other is work design. And this is exactly where it gets interesting from a project management perspective: because anyone who treats AI not as a tool but as a team member has to deal with onboarding, processes, data privacy, and governance – precisely the topics we as project managers should already master.

I’ll report on how the collaboration with Max develops. Step by step.

If you’re interested in perspectives like these on leadership, transformation, and project management, feel free to subscribe to my newsletter 👉 marc-widmann.de/newsletter