
How I Vibe-Coded a Working Meeting Booker in One Hour

TL;DR: Book a Meeting | Blue Sky Consulting

It was time to jump back on the vibe-coding bandwagon. I fell off somewhere two years ago, when I was between jobs. In that period, AI was relatively new, and I created an API that took a humble CSV file from the HomeWizard app and calculated what my electricity bill would look like with an hourly changing rate. For me, it was a great success. ChatGPT successfully created an Azure Function that listened to HTTPS requests. It successfully opened and read the CSV file and only stumbled on the actual algorithm (which I was able to fix in 5 minutes). The entire thing took about 15 minutes, where it would have taken me a lot longer with all the cruft that had to be handled. I also wanted to create a website around that API, but AI failed to get that up and running in any meaningful time.

Fast forward two years, to 2026. Claude Opus 4.5 was the new kid on the block. I asked it to create a website for my own consulting firm: Blue Sky Consulting. It went a bit overboard with some bold claims (perhaps it was trained a bit too much on American content) that were somewhat out of touch with reality; however, it was, and is, an acceptable site. I published it as is (except for some of those bold claims) without putting too much effort into fixing some minor issues (the tiny home button is still cracking me up), and proceeded with some actual functionality: the Book a Meeting button.

Booking a meeting (lunch, drinks, "I love VAR" t-shirt-giving ceremonies, a re-connect, whatever meeting option) is something everyone should have on their professional site (yes, please use it if you read this and haven't spoken to me for a while: Book a Meeting | Blue Sky Consulting). It is actually connected to my Outlook through Microsoft Graph and sends emails from contact@blueskyconsult.nl to verify would-be bookers' email addresses, to prevent the bot army prowling the internet from booking meetings (which so far seems to be working well enough). It shows you open slots in my calendar to choose from. When the authenticity of the mail address is confirmed, it books a meeting with a Microsoft Teams link and sends out the invites.

The only bug was with date-time offsets between my server and Microsoft Graph; it took a few debug rounds to fix, but Claude did a most acceptable job. I had a similar experience with a sync tool Claude created for me to sync my customers' Outlook calendars with my Blue Sky Consulting calendar.
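The booker's code isn't published, but the gist of the Graph call is easy to sketch. Below is a minimal, hypothetical Python version of the event-creation payload (the function name, subject, and field values are illustrative, not from the actual site). The key to dodging offset bugs is sending a wall-clock dateTime together with an explicit timeZone field, so Graph never has to guess the server's local offset:

```python
from datetime import datetime, timedelta

def build_event_payload(subject, start, duration_minutes, attendee_email,
                        tz="Europe/Amsterdam"):
    """Build a Microsoft Graph event body (for POST /me/events).

    Graph interprets the naive dateTime in the zone named by timeZone,
    not in the server's local zone, which avoids offset mismatches.
    """
    end = start + timedelta(minutes=duration_minutes)
    return {
        "subject": subject,
        "start": {"dateTime": start.strftime("%Y-%m-%dT%H:%M:%S"), "timeZone": tz},
        "end": {"dateTime": end.strftime("%Y-%m-%dT%H:%M:%S"), "timeZone": tz},
        "attendees": [{
            "emailAddress": {"address": attendee_email},
            "type": "required",
        }],
        # Ask Graph to attach a Teams link to the invite.
        "isOnlineMeeting": True,
        "onlineMeetingProvider": "teamsForBusiness",
    }

payload = build_event_payload(
    "Coffee & catch-up", datetime(2026, 3, 2, 10, 0), 30, "guest@example.com")
print(payload["start"])  # {'dateTime': '2026-03-02T10:00:00', 'timeZone': 'Europe/Amsterdam'}
```

The payload would then be POSTed to https://graph.microsoft.com/v1.0/me/events with an OAuth bearer token; with isOnlineMeeting set, Graph generates the Teams link itself.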

My biggest takeaway is that it would be really beneficial to set up some test automation that gives feedback to Claude. Next up is fixing an old and broken-down app: mappingtrust.com. That should be interesting, as it has enough data in it to create some actually strong test cases.

The Hidden Crisis: Who Teaches AI After Stack Overflow?

About a year ago there was a post about the demise of StackOverflow, the portal used by many techies for solving common (and not so common) issues that arise with the use of technology. n00bs and experts all mixed together with the common goal of solving issues resulting from gaps in (unread) documentation. The platform was already in decline from 2018 onwards, with a nice COVID resurgence, but the launch of ChatGPT resulted in a swift drop to almost zero.

The reason is simple: LLMs give answers in your context, where in the past I needed a number of StackOverflow posts, some blog posts, and the product's own documentation to solve my issues. LLMs combine all this information for me and spit out good-enough answers that let me solve the issues I face. LLMs are an enormous time and energy saver. However, future AI models cannot make use of content from StackOverflow, as the generation of that content has virtually stopped.

I suggested that better documentation, written by the same tools that finished off StackOverflow, might fill the hole in content creation and teach future LLMs how to solve problems. Now, hope is not a strategy, and thus I went on a fact-finding mission. I run about 45 Docker containers divided over about 30 stacks on my home server. I dislike writing documentation just as much as the next person, so I never bothered to document any of it. How nice would it be to have an LLM write it for me?

Recently, I subscribed to Claude.ai, and they have an integration with Chrome/Edge that might do the trick. So I navigated to my DocMost wiki and fed it my infrastructure Docker Compose files (think reverse proxy, IAM, Redis, DBs, etc.). In no time it spat out some descriptions of the stack. Nothing too fancy, just okay-ish content. However, the real magic happened when I asked it to create a Mermaid diagram. It really understood how everything worked together and created a fine diagram.

As it looked really promising, I then asked it to generate some content around individual Docker stacks, list dependencies, and describe how to reach the actual apps. It again created beautiful Mermaid context diagrams, showing how each stack worked together with other Docker containers. It struggled horribly when I asked it to create links under the addresses where the apps were reachable. It completely died on me when I asked it to create a summary of all the used ports on the Docker host in one table. As the generation slowed down, I tried spinning up a second instance to have two sessions generating content, but the second one gave up a lot faster than the first. It was just not as dedicated and determined, and quite frankly, highly disappointing.
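Ironically, the port-summary table Claude choked on is a small scripting job. A rough sketch, assuming the compose files are collected into a dict keyed by stack name (the regex is naive; a real version would use a YAML parser, but it illustrates the idea):

```python
import re

# Naive published-port scanner for docker-compose files: good enough for a
# one-table overview, not a full YAML parser.
PORT_RE = re.compile(r'-\s*"?(\d+):(\d+)"?')

def port_table(stacks):
    """stacks: {stack_name: compose_file_text} -> markdown table of host ports."""
    rows = []
    for name, text in stacks.items():
        for host, container in PORT_RE.findall(text):
            rows.append((int(host), int(container), name))
    rows.sort()  # sorted by host port, so duplicates sit next to each other
    lines = ["| Host port | Container port | Stack |", "|---|---|---|"]
    lines += [f"| {h} | {c} | {s} |" for h, c, s in rows]
    return "\n".join(lines)

stacks = {
    "proxy": 'services:\n  traefik:\n    ports:\n      - "80:80"\n      - "443:443"\n',
    "wiki":  'services:\n  docmost:\n    ports:\n      - "3001:3000"\n',
}
print(port_table(stacks))
```

Feeding the output straight into the wiki as a markdown table also sidesteps the screenshot-driven table editing Claude struggled with.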

Learning points

  • Claude was not strong at determining what is important to document; I had to direct it on what to focus on.
  • Diagramming using code was always my favorite, and obviously, Claude feels the same way. The Mermaid diagrams were brilliantly done.
  • Accessibility for non-MCP interfaces is really important (DocMost fails in that regard): Claude for Chrome/Edge was burning through tokens (usage limits) faster than I have ever seen before (I use the Max plan), and still failed to select text in a table to add a simple hyperlink. It kept generating and analyzing screenshot after screenshot, trying to select text and add rows to existing tables.

Conclusion
My experience with solving issues that arise from using :latest versions (and auto-updating via Watchtower) is that I find myself consulting release notes again. This is where LLMs can really help by writing comprehensive documentation, but not without proper supervision. I think, therefore, there is still some space for a StackOverflow-type site, but expect few ‘in-person’ views and lots of LLMs/agents looking around to find answers to questions. The big question is going to be: what will be the business model that floats these websites? As the eyeballs of consumers are unlikely to return …

Next steps:

Install Notion as a wiki; it has native MCP and should thus be much easier for Claude to communicate with.

Secure Document Automation Made Simple: Ollama + N8N

As it feels like everybody has jumped on the local AI bandwagon by now, I felt a bit left behind. So it was time to dip my toes in the water and avoid my year-end admin chores, of which the most mind-numbing is figuring out the tax mess, which in the Netherlands is almost entirely a snail-mail affair (yes, Dutchies, I am talking about business taxes). The number of blue envelopes shipped to my home address is staggering and messy. A classic ‘unstructured data’ problem, ripe for the plucking with the current state of local AI. It’s time to sort things out using some local LLMs and N8N to keep things private, secure, and as digital as possible.

First things first: I bought a Canon scanner (MF657CdW) that can scan documents double-sided and store them fully OCR’ed on disk or in the cloud. Next step: add some RAM to my 5-year-old desktop so it would happily run gpt-oss:120b. Then update some broken Docker containers and finally figure out why my port forwarding never worked (it sent the replies through my VPN). Port forwarding had to work to get OAuth flows running in N8N; the next blocker was my Authelia IAM, which needed some exceptions for callbacks to N8N.

Selecting the platform for running LLMs was easy: Ollama, as it opens up the LLM through an API. Installing N8N for agent/workflow tasks was also pretty simple (as I already had Postgres and Redis installed). Creating the workflow and connecting all the steps was a breeze. I successfully mailed my bookkeeper and uploaded the documents to their respective folders, and to the bookkeeping software where necessary.
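The actual routing lives in N8N nodes, but the classification step against Ollama's /api/generate endpoint can be sketched in Python. Everything here is illustrative (the prompt wording, the category set, the injectable `send` hook that keeps it testable without a running Ollama):

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

PROMPT = (
    "You sort scanned Dutch mail. Reply with exactly one word: "
    "invoice, tax, receipt, or other.\n\nDocument text:\n{text}"
)

def classify(ocr_text, model="gpt-oss:120b", send=None):
    """Ask a local Ollama model to label one OCR'ed document."""
    payload = {"model": model, "prompt": PROMPT.format(text=ocr_text),
               "stream": False}  # one JSON reply instead of a token stream
    if send is None:  # default transport: POST to the local Ollama API
        def send(body):
            req = request.Request(OLLAMA_URL, data=json.dumps(body).encode(),
                                  headers={"Content-Type": "application/json"})
            with request.urlopen(req) as resp:
                return json.load(resp)
    label = send(payload)["response"].strip().lower()
    # Anything off-menu (say, a butterfly drawing) falls back to "other".
    return label if label in {"invoice", "tax", "receipt", "other"} else "other"

# Offline demo with a canned model reply:
print(classify("Aanslag omzetbelasting 2025...", send=lambda p: {"response": "Tax\n"}))
```

Constraining the reply to a fixed word list is the cheap version of the "feed it its own excrement" loop described below: malformed answers get caught instead of derailing the workflow.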

Lessons learned:

  • N8N has some strange quirks around OAuth setup: first create the credential, and if the connection fails, try again via ‘in-private browsing’.
  • Filename changes when uploading documents are ignored, but using a bit of code to change the filename of the binary before uploading does work … Must be a n00b thing, don’t hesitate to give pointers.
  • Let the LLM worry about its own shortcomings by feeding it its own excrement and referring to the previously given spec. It took about 5 iterations before it created a prompt verbose enough to survive all the content (yes, even the butterfly my daughter drew for me, which ended up in the snail-mail stack).
  • Old hardware is still good enough for some fun, and when buying new, never skimp out on specs, especially RAM.

Next steps:

  1. When the document is a receipt or invoice, create a mutation in the right category in e-boekhouden.nl.
  2. Identify any actions and create Vikunja (my todo-list) tasks for them.
  3. Add all the data to a RAG DB in the hope the LLMs will get clever enough to actually help me optimize my tax strategies in the future (no, we’re not there yet). For now, I will hire Marieke for that.

But first: buy a lot more VRAM to speed up the larger models (8 tokens/s for gpt-oss:120b). When running in the background, this is an acceptable speed, but when debugging, I need things to run a lot faster.

Unraveling the Agile Myth: Does It Truly Deliver More Value?

Recently, I came across a post claiming that agile projects were nearly 1.5 times more successful. This headline seemed too good to be true, and indeed it was. The claim was based on a survey posted on X and some LinkedIn groups. While the survey is well-documented and certainly worth a read, there was no formal definition of how to measure success, and the definition was left to be determined by the individual (and self-selected) participants. Another issue with this study was the ways in which success was measured: “On Schedule,” “On Budget,” and “To Specification”. Considering Agile aims to generate value and Waterfall is usually measured against a business case, it is strange to measure success based on these criteria alone. Thus, I found little support for the premise posted in the post and asked if anyone could share more solid research on the subject. Suddenly, the dragons were gone, replaced by crickets…

Believing there should be something out there that proves Agile is a success, I started my search:

I found “Agile versus Waterfall Project Management: Decision Model for Selecting the Appropriate Approach to a Project”, which I expected to hit the bullseye, as making a selection should be based on solid studies showing which factors make Agile more successful than the Waterfall approach. Unfortunately, the authors probably faced the same issue I am encountering: research showing the success of Agile is hard to find. So, they did the next best thing and interviewed 15 project management experts (yes, I know) and set up a survey to corroborate the experts’ opinions. The selection criteria are categorized into “project constraints” and “people and culture,” and phrased in a way that a ‘higher’ score points towards Waterfall. It felt a bit biased to me, and there were no hints (besides the experts’ opinions) of any proof that one approach might be better than the other. In addition, the criteria cannot be scored objectively. Also, one of their reasons for not choosing Agile is disproved in the following paper.

Does Agile work?-A quantitative analysis of agile project success is a nice read. The paragraph (2.2) on project success is the best one from the studies I found so far, and the moderator variables on project success (2.3) (alignment of the project with organization goals, project complexity, and experience level of the team) seem strong summaries of the available literature on these subjects. This study concludes that there is a small, but significant, correlation between the application of agile principles in projects and project success. However, the results might be somewhat depressed, as the survey was conducted among members of PMI or LinkedIn project management groups. This might also explain why the measured up-front planning for agile initiatives was found to be similar to their less agile counterparts (you still need some, but not as much, up-front planning; feel free to contact me for a design and discovery sprint to get your agile initiative started on the right foot). Another interesting conclusion is that project complexity is not a moderator of Agile project success (in contrast with what the paper on the decision model above would like you to believe).

A Survey Study of Critical Success Factors in Agile Software Projects is a somewhat dated work (e.g., Extreme Programming is over twice the size of Scrum), but it’s an interesting study as it focuses on what makes Agile successful. Paragraph 4.5 is an interesting read, as it links the factors evaluated to the Agile Manifesto and identifies the overlaps and gaps alike.

All in all, I am still not impressed by the proof available in the scientific literature supporting the premise of Agile delivering more value. Although I found one study showing a significant improvement, one swallow does not make a summer. So, if you know of any studies out there delving into this subject, please share them in the comments.

Agile performance – Are we doing well?

Are we doing Agile well? Answering this question seems to be top of mind. However, the solutions proposed are not always that Agile from the outset. Recently I received a piece of thoughtware on this subject that was less than Agile, in my humble opinion. It wanted to answer the question ‘Are we doing Agile well?’ by measuring:

  • Velocity Variance (Lower is better)
  • Story points committed vs delivered
  • Sprint predictability

From my first impression, these metrics seem to be measuring a number of totally incoherent dimensions like holiday seasons (velocity variance, anyone?), lack of team ambition, sickness, etc. Focusing on these dimensions seems counterproductive. Any sane team being judged on these metrics would reduce their velocity to a point where variance becomes zero and avoid innovation and risk, until all story points are delivered and sprints become 100% predictable. The speed at which these metrics are optimal is probably somewhere well below the optimum output of the team in question, and thus results in wasted resources.

From my experience, any ‘performance metric’ based on story points is flawed, as it will hurt performance or impede the actual value story points and velocity provide: aiding the planning of Agile development in the short and medium term and helping predict ROI. Using story points as a performance metric usually results in either ‘story point inflation’ or wasted resources.

The best way to get teams to perform and reach your goals is by measuring the value delivered. Of course, measuring this is hard (btw, it shouldn’t be), but when you succeed it will pay off handsomely. It focuses all team members on maximizing the bottom line. This will result in the team stepping up to take ownership of the goals of your organization, and huge leaps forward in effectiveness. It does open up the vulnerability of short-term gains in favor of long-term sustainability, but this is remedied by making sure your teams contain members with a long-term commitment to your operations.

So which ‘metrics,’ other than value delivered, could an outsider use to determine how teams are performing or where you could help them improve?

  • Burndown. Does it follow the ideal path? Or is it a steep drop on the last day of the sprint?
  • Overhead in preparing stories and time spent in poker sessions. In my experience, these vary a lot.
  • Participation and contribution of each single team member.
  • How many innovative ideas do the teams generate to increase their effectiveness? (whether implemented or not)
  • Impediments – are they reported? Or is everybody accepting the status quo?

Please share your thoughts and experiences on the topic in the comment section below.

MBA: Confirmation of Graduation

Dear Mr. Boekhorst,

The Examination Board met on Tuesday 16 February 2016 to review the academic performance of EMBA15 participants.

We are pleased to inform you that your grades have been ratified by the Board and that you have met all the academic criteria required for the award of Masters in Business Administration.

We look forward to greeting you at the formal graduation ceremony on Friday 18 March 2016 where you will receive your diploma and graduation transcript.  Please note that the graduation ceremony is the day you officially graduate and accordingly may use the degree and title conferred. For this reason we are not able to supply official transcripts or diplomas before that date.  We will be providing you with the class ranking band soon after the graduation.

With kind regards,

RSM


MBA experiences: In class simulations

One of the most exhilarating activities during the MBA was the in-class simulations. The direct head-to-head challenges, either between teams or individuals, are very exciting and enjoyable if you are an adrenaline junkie. During the RSM EMBA, only three of these simulation sessions were graded.
Sonite Sales
The first was the Markstrat marketing simulation game, in which we competed in teams against each other for the favor of the virtual customers. Our team’s strategy, to dominate the low-end market segment by “selling a ****load of Sonites” to the growing customer segments of Shoppers and Savers, worked out brilliantly. At a later stage, our team also succeeded in taking >50% of the market value share of the “professional” customer segment. The blue line shows how our team’s strategy dominated the market in terms of sales volume.

Supply chain simulation

The second graded simulation was the global supply chain management simulation. It was an individual effort in which my ego got hurt by Pedro Iriondo, who took first place ahead of me. However, the combination of Board questions and rapid fluctuations in customer demand and product options made the simulation an intense and fun exam, even when losing.

ExperienceChange model[1]

The third simulation was part of the course “Leading Strategic Business Change” and was called ExperienceChange. In this intense simulation, I had the honor of leading the best team (out of 16). Important success factors were the structured and efficient preparations, which enabled us to go for a nice walk around campus between the prep session and the actual simulation.

How to prepare for these events? Play a lot of RTS games. A gamer’s mindset is not only useful when engaging customers, it also trains skills in information processing, resource allocation, and decision making. In addition, it helps to have experience in programming as I found a bug in the Markstrat simulation that gave me a minor tactical advantage over the competition. Of course, I reported the bug and how it could be reproduced in full accordance with the student code of conduct (it should be fixed by now).

Elasticsearch implementation: Brute force ‘stemming’

During my last project I was responsible, as a project manager, for implementing the open source search engine Elasticsearch and the crawler Nutch. It proved to be everything they promised and then some. To get the stemming of Dutch content right, we used a brute-force approach: a synonym file covering all the conjugations in the Dutch language (for details, see the end of this post). The result can be viewed on op.nl.

Business case

The job started with the client asking for the replacement of (part of) their web technology stack with open source solutions. They asked me to deliver a solid business case and a POC for them to evaluate and decide whether to proceed with the implementation.

During the evaluation we took a good look at all the existing solutions in place and found that the search solution was a good candidate for replacement. The existing license structure and associated costs made the use of the existing search solution undesirable for some functionality. This meant that the project was paying license costs for the search solution while also implementing custom software to create functionality the search solution was supposed to deliver.

It proved possible to create a profitable business case around the implementation of a search engine and a web crawler. The web crawler is of course an undesirable technical workaround for the fact that not all content was available in a structured format, or could be made available within a reasonable amount of time and budget. In addition, the goal was to create a system that could easily assimilate more data from unstructured sources.

Before we could start the POC we had to choose between the available open source search engines. For this purpose we applied the Open Source Maturity Model (OSMM) to the most prominent ones: Elasticsearch and Solr, both built on the search engine library Lucene. From the OSMM evaluation we learned that both solutions were deemed ‘enterprise fit’, with a clear lead in maturity for Solr. However, from our research into both systems we took the popular view that Elasticsearch was easier to use and built for the sort of scalability we were looking for.

Proof of concept (POC)

During the POC we established that the advertised ease of installing, feeding, and querying Elasticsearch proved to be true. In addition, we were able to ‘scale’ the system by simply starting another instance of Elasticsearch; both instances automatically started sharing their data and dividing the work. During the POC we also set up the open source variant of Puppet to be able to automatically provision new Elasticsearch nodes, to increase performance or replace defective ones.

During the POC we also selected a web crawler for the search solution: Apache Nutch. OpenIndex was selected for implementing this part of the solution and did a brilliant job of configuring the crawler and implementing the interface between Elasticsearch and Nutch 1.x.

Brute force ‘stemming’

The only hiccup worth mentioning came when we started to evaluate the quality of the search results. We found that none of the traditional stemming algorithms for the Dutch language (compared to English, a bit irregular) could meet our quality goals. Fortunately, I thought of a better way to approach the problem: brute force. I created a file containing a line for each word, with all its conjugations, in the Dutch language. We added this file (which contained ~110K lines) as a list of synonyms in Elasticsearch, to be applied at index time. In spite of the reservations of some of the experts I consulted, this approach works superbly. The quality goal we set was easily reached. The only significant drawback was the increase in the size of the index (about 50%). As we did not hit the RAM limit, the performance of our Elasticsearch cluster was not negatively impacted.
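For readers wanting to replicate this, here is a minimal sketch of what such an index definition could look like on a modern Elasticsearch version (the analyzer and filter names, file path, and field name are illustrative, not from the original project; the synonym file holds one comma-separated word group per line, e.g. `loop, loopt, liep, liepen, gelopen`):

```python
def dutch_brute_force_settings(synonyms_path="analysis/dutch_conjugations.txt"):
    """Index body wiring a large conjugation file in as an index-time
    synonym filter, instead of relying on an algorithmic Dutch stemmer."""
    return {
        "settings": {
            "analysis": {
                "filter": {
                    "dutch_conjugations": {
                        "type": "synonym",
                        # Path is resolved relative to the Elasticsearch config dir.
                        "synonyms_path": synonyms_path,
                    }
                },
                "analyzer": {
                    "dutch_brute_force": {
                        "tokenizer": "standard",
                        "filter": ["lowercase", "dutch_conjugations"],
                    }
                },
            }
        },
        "mappings": {
            "properties": {
                # Setting the analyzer on the field applies it at index time,
                # so a query for any conjugation matches the whole word group.
                "content": {"type": "text", "analyzer": "dutch_brute_force"}
            }
        },
    }

body = dutch_brute_force_settings()
```

The body would be passed to the index-creation call (PUT /my-index); expanding every conjugation at index time is what trades index size (the ~50% growth mentioned above) for simple, fast queries.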

My future career development

How do I see my career developing and how will an MBA @RSM help me achieve my goals?

I want to become one of the best conductors of customer journeys: removing the hurdles customers experience when interacting with the same company through different channels and touchpoints, and optimizing the outcome of the interaction between organizations and customers from both the company’s and the customer’s perspective.

At this point in my career my focus is shifting from solving technical problems to spotting business opportunities and making the most of them. One of the hottest topics in my field is CXM (customer experience management); most companies are planning to implement parts of this concept. The holy grail of CXM is creating compelling and fully integrated customer journeys across touchpoints. Only a few organizations seem to be successful in consistently creating customer journeys, and the million-dollar question is: how do you create an organization that can consistently generate new profitable customer journeys while executing and optimizing the existing ones?

In my opinion, creating, executing and optimizing cross-channel customer journeys will demand a huge increase in cross-functional cooperation. The current operating model, where organizations are made up of a number of loosely coupled functional or product-oriented silos, each with their own unique KPIs and targets, is not designed to facilitate the necessary level of cooperation. To effectively facilitate this cooperation, organizations will need to change drastically and become more customer-centric.

Changing organizations from their product or functional grouping into a customer-centric organization, while at the same time losing as little of the advantages of the previous organizational form as possible, is a job I am pretty excited about. It will take a lot of pioneering, determination, creativity and leadership to drive such change.

In order to be successful in such a role I will need to develop a broad skill set in business administration. It will require learning about business economics, strategy, marketing, human resources, organizational behavior and operations. I could acquire these skills by following separate courses on each subject, going to a university for an academic degree, learning on the job, or getting an MBA.

Acquiring the skills through separate courses would not provide me with enough insight into the intricate couplings between the different aspects of managing a business. The university option would provide me with a lot of know-how, but would fall short in giving me a good sense of the context in which to apply it. On-the-job training would provide the necessary context, but might prove to be a long journey.

An executive MBA offers both the content and the context I am looking for. The context might be less than I would get from learning on the job, but this is easily compensated for by the much shorter period in which the skills are acquired. In addition, it will provide me with a network of successful and motivated individuals across industries and functions, which is a nice bonus.

My reason for choosing an MBA at the Rotterdam School of Management is partly based on practical grounds. It will allow me to keep my job at Deloitte Consulting and give me the opportunity to directly apply my newly acquired knowledge and insights in a professional environment. The other decisive argument is the focus on leadership development. Leadership is, in my humble opinion, the most important skill for a business leader to be successful in any setting.

Note to readers: This is one of my admissions essays. Please provide me with feedback.

MBA sponsorship approved

My business case for the Executive MBA at the Rotterdam School of Management has been approved. Somehow together we (thanks to all who helped) succeeded in convincing:

  1. My wife
  2. My counselor
  3. 2x Service line leader
  4. Service area leader
  5. Consulting leader, talent partner of consulting and the learning manager.

Thank you for the faith you have placed in me by sponsoring my tuition fee and giving me time off to study. I have already started working on my application for a position in the class of 2015. My application essays will follow shortly; please come back later to provide me with feedback.