This is where I share ideas from my teaching and research, especially the ones that relate to real-world business challenges. I write about topics like startup operations, technology and innovation management, and how AI is being used in today’s workplaces. My goal is to make research insights useful and accessible. Hopefully you'll find them interesting whether you’re building something new, managing change, or just curious about how these ideas play out in practice. I view writing as a way to structure my own thinking. So while I write primarily for myself, if these reflections help someone else along the way, that’s a welcome bonus.
Small businesses often run with tight budgets and lean teams. They might not have a specialist for every task, so people are forced to wear many hats at once. LLMs can’t brew your coffee or close a sale in person, but they can take over many time-consuming tasks. This can help free up time to focus on high-level work where your expertise creates the most value. Basically, it’s like having an assistant handle the boring stuff.
LLMs can help with most tasks that involve text, whether that’s reading or writing. Here are a few everyday tasks where an AI assistant can lighten the load:
Drafting Emails: Have rough bullet points or a quick idea of what you want to say? An LLM can turn those notes into a polished, professional email in seconds (there’s a small code sketch of this right after the list).
Summarizing Customer Feedback: Instead of manually reading through a stack of customer reviews or survey responses, you can ask an AI to pull out the key points. For example, it can take a lengthy customer review and boil it down to the main pros, cons, or suggestions.
Creating Content: Need a blog post outline or some social media updates? LLMs are great brainstorming partners. You can prompt them to generate a few Facebook post ideas for your new promotion or draft a paragraph introducing your latest product.
Answering Common Questions: Tired of typing out the same answers to customer inquiries? AI can assist here too. An LLM can be used to power a simple Q&A chatbot on your website. It’s capable of answering common customer questions so customers get instant info while you reduce repetitive work.
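To make the email-drafting idea concrete, here is a minimal sketch in Python using OpenAI’s official client library as one example of a cloud LLM service. The model name and the notes are just placeholders I made up for illustration; the point is simply that a few bullet points go in and a drafted email comes out.

```python
# pip install openai
# Minimal sketch: turn rough notes into a draft email with a cloud LLM.
# Assumes you have an OpenAI account and OPENAI_API_KEY set in your environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

notes = """
- thank them for the big order last week
- shipment delayed ~3 days because of a supplier issue
- offer 10% off their next purchase
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; any chat model would do
    messages=[
        {"role": "system", "content": "You draft short, friendly, professional business emails."},
        {"role": "user", "content": f"Turn these notes into an email to a customer:\n{notes}"},
    ],
)

print(response.choices[0].message.content)  # always review and edit before sending
```

The draft still deserves a human pass before it goes out, which brings me to the caution below. Also note that this sketch sends your notes to a cloud service, which matters for the privacy discussion later in this post.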
In short, an AI helper can handle a lot of the writing and reading tasks that normally eat up your day. But I would caution that these tools aren’t about replacing humans; they’re about offloading the busywork, not the core value you offer your customers. There are several dangers in misusing LLMs. For example, your customers may find you inauthentic if you don't at least proofread the AI's work before sending it. Whether an audience is willing to read or listen to AI-generated content is beyond my expertise. I'd like to focus on a different issue, one I feel is missing from the conversation: data privacy.
Using AI sounds great, but you might be wondering: “Is it safe to feed my business information into these tools?” If so, it’s a smart question. Most popular LLM services (like ChatGPT) run in the cloud. This means when you use them, your data (the prompts you give and the answers you get) is sent over the internet to some company’s servers. And that can be risky for privacy and security.
Think of what could go wrong: If you paste a customer’s email or a client’s document into an AI tool, where is that data going? Who can see it? In some cases, it might be stored or even used to improve the AI’s training. There have already been cautionary tales. For example, Samsung had an incident where engineers accidentally leaked sensitive internal code by pasting it into ChatGPT. The company quickly banned employees from using such AI tools for confidential work. The last thing you want is your business’s private info or your clients’ data floating around on the internet.
Beyond the general “oops” scenarios, there are also laws that regulate how you handle personal data. If you’re in Europe or deal with EU customers, you’ve probably heard of GDPR (General Data Protection Regulation). GDPR basically governs how personal information should be used, processed, and stored. It’s the reason you see all those cookie consent banners and why companies are careful about user data. In fact, Italy’s data protection watchdog temporarily blocked ChatGPT in 2023 over privacy concerns, saying there was no legal basis for OpenAI to mass-collect and store people’s personal data for training the AI.
If you’re in healthcare or handling medical info, HIPAA is another big one. HIPAA is a U.S. regulation that protects patient health information. Under HIPAA, you must be extremely careful about any private health data. Plugging patient details into a third-party AI service could be a serious violation. Experts have warned that even accidentally putting protected health information into a tool like ChatGPT can count as a HIPAA breach. In other words, if you asked an AI to draft a summary of a patient’s visit and included identifiable details, you might be breaking the law and risking hefty fines.
For any industry, it boils down to this: if the data is sensitive or private, you need to think twice before using a cloud AI service. You don’t want to inadvertently expose customer addresses, credit card numbers, personal complaints, or confidential business plans. Aside from upsetting your customers, it could land you in legal trouble if it violates regulations.
At this point you might be thinking, “I get that privacy is important, but what can I do to compete against competitors that freely use AI?” The good news is that AI tech is evolving fast, and new solutions are emerging to tackle exactly this problem. One of the most promising trends is local AI models. Basically, it means running the AI on hardware you control (like a computer at your office) instead of on a cloud service. The best part is that there are free options.
Imagine having an AI assistant that lives on your laptop or a small server in your back office. When you use it to analyze a document or draft an email, the data never leaves your possession. Because the AI model runs locally, any sensitive customer info stays on-premises (on your machines) and isn’t sent out to the internet. This setup hugely reduces the risk of leaks. It also makes it much easier to comply with regulations like GDPR or HIPAA because you’re not sharing data with a third party.
As recently as a year ago, the idea of running a powerful LLM on a normal PC would’ve sounded crazy. These models required massive supercomputers. But AI research is moving so fast it's hard to keep up. Tech companies and open-source communities have been releasing smaller, more efficient models that can run with modest computing power. Open-source LLMs (which are essentially “free” models that anyone can use and even modify) have exploded in number since 2023. In fact, on-premises (local) AI solutions already make up over half of the AI deployments in some areas, and that share is expected to grow. The quality of these local models is improving quickly too. We’re seeing the gap between these open local models and the big cloud AI services narrow every month.
To be fair, today’s local LLMs still might not completely outshine the likes of ChatGPT in every scenario. Some might be a bit slower or less polished in their responses, especially if you’re running them on standard hardware. However, the progress is very encouraging, and for many routine uses (like drafting emails or summarizing reports), a local model can do an impressive job. Plus, you get peace of mind knowing that nothing you input is leaving your own environment. It’s a trade-off many small businesses are willing to make for the sake of privacy and control.
There's a clear trajectory toward an era where you can have your own “in-house” AI that’s both capable and compliant with privacy needs. The next big challenge is making these local AI tools accessible and easy to use for everyday business folks. Right now, setting up a local LLM might require some IT know-how. This includes dealing with complicated installs, maybe some coding, and having the right hardware. Not every small business has an IT team on standby, and you probably don’t want to spend a weekend configuring AI software.
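To give a flavor of what that “IT know-how” looks like today, here is a minimal sketch that summarizes a customer review with a model running entirely on your own machine. It assumes you’ve installed Ollama (a free, open-source tool for running local models) and pulled a small model such as llama3.2; the tool, model name, and review text are just one illustrative option among several.

```python
# pip install requests
# Minimal sketch: summarize a customer review with a local model via Ollama.
# Assumes Ollama is installed and running, and a model has been pulled,
# e.g. `ollama pull llama3.2`. Nothing in this script leaves your machine.
import requests

review = (
    "The haircut was great and the staff were friendly, but I waited 40 minutes "
    "past my appointment time and the online booking page kept crashing."
)

payload = {
    "model": "llama3.2",  # example model name; use whichever local model you pulled
    "prompt": f"Summarize this customer review as short pros and cons:\n\n{review}",
    "stream": False,  # return the whole answer at once instead of streaming tokens
}

# Ollama serves a local HTTP API on port 11434 by default.
response = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
response.raise_for_status()

print(response.json()["response"])
```

Swap in any other local runner or model you prefer; the key point is that the review text never travels beyond your own computer.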
In summary, LLMs offer huge benefits to time-strapped small businesses. By using them carefully (and responsibly) you can boost productivity without risking your customers’ trust. Cloud-based AI tools are powerful, but they come with privacy considerations. Fortunately, the rise of local AI models means that soon you won’t have to choose between the convenience of AI and the confidentiality of your data. If you have an interesting story about AI deployment in your business, or would like to chat with a researcher in this area, you are very welcome to reach out! I can be contacted using the message box at the bottom of this page.
All the best!
That wasn’t an isolated case. In my experience interacting with many technically talented entrepreneurs, this story (or some flavor of it) happens all the time. Perfectionism and over-attention to detail keep a lot of awesome innovations locked in the garage. It's like having a shiny sports car but never taking it out for a drive because you're afraid a little dust might land on the paint. The irony is that while we're busy perfecting, someone else with an “okay” but available product might swoop in.
So, how do we break out of this pattern? Let me introduce the concept of the Minimum Viable Product (MVP for short).
The Minimum Viable Product is the most basic version of your product that still delivers the core value to your customers. Think of it as the first flight for your idea, just a dependable little plane that can get off the ground (or a very basic canoe if you prefer nautical metaphors). Your MVP isn’t supposed to be feature-packed or polished to perfection; it’s supposed to be usable and useful in its simplest form.
Why start with something so stripped-down? Because an MVP lets you launch early instead of spending forever building the "perfect" product in isolation. It might feel uncomfortable to put out a product that still has rough edges. (That's how I feel about sharing my writing too.) But here's the thing: you can’t fix what you don’t know is broken, and you won’t know until real users try it.
By launching an MVP, you’re essentially saying, “Alright world, this is what I’ve got so far. What do you think?” It's a conversation opener with your market. You’re not married to your MVP’s initial feature set or design; you’re just testing the waters. And in the fast-paced startup world, testing the waters early is far better than sinking in secret. If you wait until your product is flawless, you might find that the world moved on without you.
One of the biggest advantages of releasing an MVP is that you start getting real feedback from actual users (and hopefully paying customers). Until someone who's not your mom or your best friend uses your product, everything is basically just educated guesswork. You might think feature X is the greatest thing, or worry that your button color will make or break user experience – but you won't truly know what people care about until they tell you with their actions.
Early customers might love parts of your product you thought were minor, and they might ignore or struggle with the parts you obsessed over. And that's good information to know! Every piece of feedback, every “I wish it did this,” or “I got confused here,” or “This part is awesome!” is a clue guiding you toward a better product. Real-world input is a tool for growth, not a personal critique of your abilities. In fact, getting criticism or bug reports early on is incredibly useful. It's like having a bunch of free advisors telling you how to make your product better.
Early feedback helps you iterate. You can improve your product based on what real users actually need or want. That beats imagining what users might want and potentially building the wrong thing. Even in large corporations I’ve seen teams spend months perfecting a feature that no one ended up using, and I’ve been guilty of it myself. On the flip side, I’ve seen teams launch something basic, listen to users, and then quickly add the features that people were actually asking for. The latter approach not only saved time and money, it also made users feel heard and involved.
Beyond just feedback on features, launching early gives you something even more profound: a reality check on your whole idea. This is often called market validation. In plain terms, this is proof that someone out there genuinely wants what you're offering, and ideally, proof that they’ll use it or pay for it.
Let’s be honest, we all think our ideas are brilliant (it's our baby, after all). But the ultimate confirmation doesn't come from our own head or even from investors or mentors saying "nice idea". It comes from real users voting with their wallet. Conversely, if you put your MVP out there and hear crickets, well that’s incredibly valuable information too. It might sting, but now you know that something about your idea or execution isn’t hitting the mark, and you can go back to the drawing board before you've poured years into it.
Think of market validation as research that’s backed by actual evidence. Even a small amount of early adoption can be the boost you need. Did 50 people download your app in the first week with minimal marketing? Fantastic! That means 50 people have a problem your product might solve. Did your first two customers for your service stick around for a second month? Then you’re delivering real value to them. Those little wins are huge. They tell you “Yes, you’re on to something, now keep going!” And if the opposite happens and nobody shows interest at first, that's a sign to learn and adapt. Maybe you need to adjust your solution or target a different niche. Either way, you’ve gained insight that you simply wouldn’t have by waiting in stealth mode.
In short, validation from the market is one of the most valuable forms of confirmation you can get as a founder. It’s the difference between thinking you have a great solution and knowing (because people are using it). But you only get that validation if you actually put your product out there.
I get it, releasing something that isn’t polished to your standards is scary. Your product is personal, and exposing it to the world means exposing yourself a bit. But here’s a comforting thought: every successful product you know and love started out a little rough. The first versions of big-name apps and services were often laughably simple or flawed. The difference is, those founders put something out, learned, and kept improving it. They didn’t wait until they had a multi-million-dollar platform with every bell and whistle. They started with a version 1.0 that made them a bit nervous, and then they hustled to make version 1.1 better.
I forget where, but I've heard someone say: “If you’re not embarrassed by the first version of your product, you’ve launched too late.” In other words, feeling a bit embarrassed is normal. It means you actually got the thing out in front of people in a timely manner. You can always fix bugs, add features, and polish the design later, but you can’t regain lost time or missed opportunities. The competitor that "stole" my friend’s idea wasn’t necessarily smarter or more talented. They were just willing to launch sooner with a basic product, and then they improved it with real user input. Speed and willingness to iterate trump perfectionism in the startup world.
So, to all the business students and entrepreneurs reading this: give yourself permission to launch early. Treat your startup like an experiment. Your MVP is not a reflection of your worth or the final judgment on your idea. When you release that early version into the wild you’re saying, “Hey, I think this might solve a problem for you. Try it out and let me know what you think.”
Real-world input is your friend. Let your users become part of your product's story. Listen to their praise and complaints with an open mind. Use that free R&D to guide what you do next. You might be surprised how supportive early adopters can be when you’re transparent that this is an early version and their feedback will help shape what comes next. People love being part of a success story in the making.
At the end of the day, an idea sitting in a workshop (or garage, or your computer) isn’t changing anyone’s life. The real validation, learning, and improvement only start once you launch that thing. So, launch it. Even if it’s a bit scrappy, even if not all the lights are green yet. Launch it, and then keep making it better with your users along for the ride.
If you've reached this point, you may have suspected that I've written this post for a very specific audience: myself. I've been redesigning my personal website, and its associated blog, over the past weekend and have been reluctant to publish it online. Maybe after I write a dozen blog posts I could launch it? In the spirit of the old adage "do as I say, not as I do," I had to say all of this to myself so that I would actually launch. I decided to publish my writing to keep myself accountable, and to elicit some feedback as well. That's why you'll find a comment submission box at the bottom of this page. I promise to read everything sent, and will do my best to respond to any comment that is respectful and kind.
Until next time!
Reach out with questions
If you have any questions or comments, I invite you to reach out. If you include your email address, I'll be happy to get back to you.