I can get answers from ChatGPT, but Deep Research gives me a whole dissertation I’ll almost never need



I love diving into new topics and falling down research rabbit holes, but sometimes I just need a quick, efficient answer to a question or a concise guide to a task. If I’m trying to figure out how long to roast a chicken or whether Pluto has been reinstated as a planet, I want a short list of bullet points or a simple yes or no.

So, while ChatGPT’s Deep Research feature has proven to be an amazing researcher when I want to immerse myself in a topic, I haven’t made it my default tool with the AI chatbot. The standard model’s knowledge base, along with its search tool, resolves pretty much any day-to-day question or issue I might ask it. I don’t need a formal report that takes 10 minutes to compile just to figure out how to make a meal. But I do find Deep Research’s comprehensive answers viscerally appealing, so I decided it was worth comparing it to the standard ChatGPT model (GPT-4o) by giving both a few prompts I could imagine submitting on a whim, or with little long-term need for the answer.

Beef Wellington

(Image credit: ChatGPT Screenshots)

For the first test, I wanted to see how both models would handle a classic, somewhat intimidating recipe: Beef Wellington. This isn’t the kind of dish you can just throw together on a weeknight. It’s a time-consuming, multi-step process that requires patience and precision. If there was ever a meal where Deep Research might prove useful, this was it. I asked both models: “Can you give me a simple recipe for kosher Beef Wellington?”

