Behind the scenes of my Harvard class 🎓

Practical use of AI, and future scenarios: opportunities and threats of AI

Before we get started: yesterday we opened up subscriptions to the Master in Prompt Engineering. We’re at edition 5, and every edition before this one sold out in record time.

We’ve already sold 1/3 of the tickets, so if you want to go from “I think AI is cool” to “I’m building the future,” this is your chance, and I wouldn’t wait too long.

You can join here.

This week I gave my lectures for the “Innovation with AI in Health Care” program, part of executive education at the Harvard T.H. Chan School of Public Health.

I want to take you behind the scenes and walk you through some of the concepts I covered during the class.

I mostly focused on two aspects:

  1. Practical use of AI (basics of prompt engineering)

  2. Future scenarios on opportunities and threats of AI (we made this specific for healthcare)

I want to show you a practical example of prompt engineering that always blows my mind.

It’s a use case for medicine: you give the AI a medical case and a few possible answers, and it’s supposed to pick the right diagnosis for that case.

I first demonstrated it with the following prompt:

Act as a doctor.

I will give you a medical case, you will give me an answer based on the options I give you.

Your answer should just be the letter of the right diagnosis.

I gave it a medical case I knew the right answer to (answer “E”) and consistently got the wrong answer (see below).
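To make this concrete, here’s a minimal sketch of what that first attempt looks like in code. It assumes the OpenAI Python SDK; the model name and the `case_text` placeholder are illustrative, not the exact setup I used in class.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The baseline prompt: ask for just the letter, with no reasoning.
BASELINE_PROMPT = (
    "Act as a doctor. I will give you a medical case, you will give me an "
    "answer based on the options I give you. Your answer should just be "
    "the letter of the right diagnosis."
)

case_text = "..."  # the medical case plus its lettered options (placeholder)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": BASELINE_PROMPT},
        {"role": "user", "content": case_text},
    ],
)

# The model answers immediately with a single letter; in my demo this
# answer was consistently wrong.
print(response.choices[0].message.content)
```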

I then demonstrated a technique called “chain of thought” (some people call it other things, like “show your work”). The idea is to let the AI “think” before giving a response. This is the new prompt:

Act as a doctor.

I will give you a medical case. First, you are going to think out loud about the case.

Then, for each possible answer, you will think out loud whether that answer is the right one or not.

After you've thought through all the possible answers, you will tell me which one is most likely to be the right one.

With this prompt, the GPT model gave the right answer 100% of the time.
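If you want to see the difference yourself, here’s the same call with the chain-of-thought prompt swapped in, run a few times to check consistency. Again, this is a sketch under the same assumptions as the previous one (OpenAI Python SDK, illustrative model name, placeholder case text), and it reuses the `client` and `case_text` defined above.

```python
# The chain-of-thought prompt: ask the model to reason before answering.
COT_PROMPT = (
    "Act as a doctor. I will give you a medical case. First, you are going "
    "to think out loud about the case. Then, for each possible answer, you "
    "will think out loud whether that answer is the right one or not. "
    "After you've thought through all the possible answers, you will tell "
    "me which one is most likely to be the right one."
)

for run in range(5):
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": COT_PROMPT},
            {"role": "user", "content": case_text},
        ],
    )
    reasoning = response.choices[0].message.content
    # The model now reasons step by step; the final line states the most
    # likely diagnosis, so print it to compare runs.
    print(f"Run {run + 1}:", reasoning.strip().splitlines()[-1])
```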

This simple example is particularly interesting for a few reasons:

  1. It shows that when you think ChatGPT is not good enough for a specific use case, it may simply be that you haven’t prompted it well enough. Think about it: how many opportunities are you missing because of your lack of knowledge?

  2. You can get a lot of value from quite simple concepts. It didn’t take much to showcase this simple example, yet the impact on the participants was outstanding. I love cases like this, where a small input (just learn this technique) gives a disproportionate return (enabling a whole new set of skills, products, and capabilities).

  3. How much knowledge is “buried” in LLMs? This is fascinating (and scary) to me, as it reinforces the idea that AI may be capable of much more than we think, but that we haven’t figured out a way to “unlock” these capabilities yet.

I want to end this article by sharing a thought that emerged from our session on opportunities, barriers, and threats. A common theme everyone identified was the importance of education. Without proper education, it’s impossible to ensure a fruitful future for AI, for a few reasons:

  • As you’ve seen above, AI education can make the difference between impossible and possible for many potentially life-saving applications

  • Educating doctors can help them both use AI in the best way possible and mitigate its risks

  • Educating a diverse group of people, like the 120 doctors, entrepreneurs, and executives in this class, ensures that different voices and points of view are involved in defining the future of AI

  • Educating policymakers is the only way to ensure regulations that balance mitigating AI’s threats with reaping the amazing benefits AI can bring us

I don’t know whether I’ll see you one day in a Harvard class, in an AI Academy one, or somewhere else. Either way, I hope you’ll invest in your AI education because the opportunities are endless and we need you.