Let’s talk about my recommended “Dos and Don’ts” of using AI for the MCAT.

Planning on using AI to help you study?

How AI can potentially help:

  • While AI models can make mistakes, the depth of knowledge the MCAT requires typically isn’t enough to trip up models like ChatGPT. Language models seem to go wrong most often when applying scientific knowledge to our real lives as humans (like when giving out medical advice), but as far as simply explaining what a magnetic field is in manageable terms, they’re fine.

    Asking AI to explain a bit of content that refuses to make sense to you is a good use of it. Just make sure the explanation at least aligns with what you were expecting, and feel free to do your own Googling/asking around if you suspect the answer is incorrect.

  • If you’re not super tech-savvy, AI can help you build organized documents to track things like the questions you’ve missed, an easily trackable schedule, things of that sort.

  • AI can do a pretty good job of creating practice problems that reinforce your content knowledge, or problems that pretty accurately test the math you’d need to do on the exam, but you’ll need to be able to verify the answers on your own.

    AI can explain content pretty well, but the second you ask it to get creative, it sometimes doesn’t understand how to solve the problems it has made for itself. For example, a lot of MCAT answers hinge on nuance, like “these two answer choices are both true, but only this one actually applies to the passage/answers the question,” and AI models are inconsistent in their ability to pick up on that. I’ve noticed a few other ways it struggles when evaluating the questions it has made for you, but long story short: AI is good at creating problems, but only do this if you feel comfortable evaluating whether or not the AI’s “correct answer” is actually correct.

    Realistically, I’d recommend going to Khan Academy or another free resource first.

    But if you’re set on using AI to make practice problems for you, I recommend feeding it an example problem first and asking it to make similar ones, rather than having it come up with random questions on its own. It’s great for testing content, but the second you ask it to make MCAT-style questions is where things can go off the rails.

  • If you can tell the AI model which answer choice is correct, it can sometimes provide the explanation that the question-writer of your resource didn’t do a great job of giving. If you don’t provide the answer, you’ll find pretty quickly that it makes mistakes here, but if you feed it the answer, it’ll search through the scientific literature to back up the right answer, and you can ask it to explain things in a way that makes sense to you.

    This can apply to CARS as well, to an extent. If you ask the AI model to stick strictly to the information presented in the passage, and tell it what the correct answer is, it can provide explanations for the correct and incorrect answer choices that might make more sense than the ones you were given.

  • I personally do not trust AI to create practice problems for CARS, and I don’t think you should either.

    But if you’re struggling with reading comprehension and summarizing passages, you can ask an AI model to write you a passage about a specific topic you struggle with (and ask it to make the passage wordier or vaguer if you want) to get practice summarizing MCAT-style CARS passages.

    Again, it’s not good at creating and answering CARS questions on its own, but it can at least create the passage for you to practice reading.

  • You’ll notice pretty quickly that I recommend being super careful whenever you ask ChatGPT to actually analyze something, but since it is a language model, it can be somewhat creative in coming up with different strategies for reading through CARS passages, and one of them might click for you. You can even give it your strengths and weaknesses as a test taker to see if it can recommend something to compensate. If a strategy doesn’t work, don’t use it, which is why I don’t think this is at all a dangerous use of AI for the exam.

Where I think AI falls short:

  • I’m sure you’ve heard stories of professors asking their students to take a quiz with the help of AI, only for the students to do poorly because the AI struggled to answer the questions.

    While I’ve found that AI models (specifically ChatGPT) are generally pretty accurate, they aren’t 100% accurate when you ask them to answer a practice problem for you, and that’s enough for me not to recommend it. Even if AI misleads you on only one question out of every 20, that’s still not worth the risk.

    In evaluating questions, I’ve found that AI is truly only helpful if you can feed it the answer.

  • While I think you’d be shocked at how good AI can be at evaluating figures, it still struggles with analyzing things like UV-Vis spectra, some bar graphs, and other figures that are a bit more vague about what they’re showing you. So unless it’s a super simple Western blot or something like that, it’s probably going to make mistakes interpreting data.

    If the passage is pulled from an actual medical research paper, it can sometimes get around this by literally looking up the paper and reading the descriptions of the figures (which test-makers remove to force us to analyze the figures ourselves), but even then, I don’t recommend it.

  • To put it plainly, ChatGPT has no clue what will show up on the exam, and neither does anyone else. I think this is intentional on the AAMC’s part: if they released a specific list of what could show up, everyone would know exactly what to study. It doesn’t matter whether you or I agree with that, but that’s potentially their reasoning for it.

    So I don’t recommend asking AI to decide whether or not some bit of content or some question is realistic to what will show up on the exam, because it’s just looking up what other people have said, and it tends to go way overboard in many regards (it once claimed to me that we needed to memorize the equation for Heisenberg’s Uncertainty Principle and take derivatives of it to solve for other crazy things I had never heard of).

    To emphasize that point, any time I’ve tried having it make practice problems for the exam, I’ve always fed it my content document and told it to only ask questions based on the information in that document, and it still sometimes struggles to stick to the script (which is why I’ve abandoned the approach).

  • For many of the reasons I’ve already stated, any time you ask AI to create practice problems that aren’t simply for understanding content, but are in the style of the MCAT, things tend to go wrong pretty often, unfortunately. It can make errors in reasoning, or accidentally create two correct answer choices, things of that sort, and it struggles to understand why those are or aren’t okay.

    So it’s realistically just not worth it.

The Necessary and Obvious Disclaimers about AI

I’m not at all going to be the one telling you how you should or shouldn’t feel about AI, but I will say that while AI has some positives, as mentioned above, it also has a lot of negatives, not only in its ability to help you on the exam, but also in its environmental impact, among other social issues.

Again, I’m not going to tell you what you should and shouldn’t be using; I just think it’s important to use resource-intensive tools like this sparingly and on an “as-needed” basis, since they have a significant environmental impact.