I’m Anuruddh, and I’m building August, an LLM-powered health AI that’s aimed at bridging the global doctor-patient gap. We’ve recently topped the US Medical Licensing Examination and MultiMedQA benchmarks, scoring 94.8% in the USMLE and beating MedPaLM’s 86.6%.
Apart from being competent, we’ve designed August to be empathetic and proactive, because we feel that’s the need of the hour in health. Since launching last month, it’s helping all sorts of people, from cancer patients to new fathers to people looking for a home remedy for a cold.
It can read digital blood reports, and accepts input as audio or text. While it’s not designed for advanced preventative healthcare yet, we’d love it if you all could give it a try and drop some feedback.
August is available for free on WhatsApp at https://meetaugust.ai/wa
Feel free to drop it texts the way you’d usually text on WhatsApp. One of our core focuses is to go beyond the one-message, one-answer approach of current chatbots and enable a more natural texting experience.
Hey Anuruddh, congrats on beating MedPaLM and even Hippocratic.AI.
I checked out August and I have to say, it’s good. There’s something new here compared to all the available LLMs I’ve interacted with so far.
I would love to understand the backend architecture and the number of parameters.
Is it trained from scratch, or have you fine-tuned an existing model?
Great job! I just tried the app and it’s great. How are you monetizing it? It’s such a wonderful app, and I guess a subscription service is one way you could think of monetizing in the future.
Sorry, I take my words back, for now. I’m not sure how this model can beat MedPaLM or even ChatGPT if it can’t differentiate the basic units of health vitals and nutrients.
Actually, the Vitamin D3 in that report is in nmol/L rather than ng/mL. This is one of the reasons we at Jile Health have been careful about making our fine-tuned AI model public. And frankly, our use cases serve different purposes.
Please take this as feedback Anuruddh.
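For reference, the nmol/L vs ng/mL mix-up comes down to a single conversion factor. A minimal sketch (the 2.496 factor is the standard one for 25-hydroxyvitamin D, derived from its molar mass; nothing here is from August’s code):

```python
# Converting 25(OH)D readings between nmol/L and ng/mL.
# 1 ng/mL ≈ 2.496 nmol/L (molar mass of 25-hydroxyvitamin D ≈ 400.6 g/mol).
NMOL_PER_NGML = 2.496

def ngml_to_nmol(value_ngml: float) -> float:
    """Convert a vitamin D reading from ng/mL to nmol/L."""
    return value_ngml * NMOL_PER_NGML

def nmol_to_ngml(value_nmol: float) -> float:
    """Convert a vitamin D reading from nmol/L to ng/mL."""
    return value_nmol / NMOL_PER_NGML

# A report value of 75 nmol/L is ~30 ng/mL; reading the number as ng/mL
# without converting would overstate the level by roughly 2.5x.
print(round(nmol_to_ngml(75), 1))  # 30.0
```

This is exactly the kind of mistake a report-parsing step needs to guard against: the number is right, the unit makes it wrong.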
Hey Suman, thanks for the feedback. It definitely helps.
The report extraction has a text-extraction and condensation step before details are passed to the LLM. I think that’s where this broke. I’ve asked the team to look into it.
We don’t pass the entire report into the core LLM, to stay token-efficient.
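For anyone curious what such a two-stage pipeline can look like, here’s a minimal sketch. All function names and patterns are illustrative assumptions, not August’s actual implementation:

```python
# Illustrative two-stage report pipeline: extract text, then condense it
# before handing results to an LLM. Hypothetical code, not August's.
import re

def extract_text(report_pages: list[str]) -> str:
    """Stage 1: pull raw text out of the report (OCR/PDF parsing in practice)."""
    return "\n".join(report_pages)

def condense(raw_text: str, keep_patterns: list[str]) -> str:
    """Stage 2: keep only lines matching known lab-value patterns, so the
    LLM prompt stays token-efficient. A real condenser must preserve units,
    or downstream reasoning breaks (e.g. nmol/L read as ng/mL)."""
    kept = []
    for line in raw_text.splitlines():
        if any(re.search(p, line, re.IGNORECASE) for p in keep_patterns):
            kept.append(line.strip())
    return "\n".join(kept)

pages = ["Vitamin D3 (25-OH): 75 nmol/L", "Patient thanked the staff."]
summary = condense(extract_text(pages), [r"vitamin", r"\d+\s*(nmol/L|ng/mL)"])
print(summary)  # Vitamin D3 (25-OH): 75 nmol/L
```

The trade-off is visible even in the sketch: condensation saves tokens, but any detail the filter drops (or mangles, like a unit) never reaches the model.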
Also, you can see our results from the USMLE benchmark here
This also includes the output from the core August AI engine
Thanks! Yeah, we have a few avenues in mind. While we’d want August to be free, we’re exploring some premium features as well as affiliate models for monetization
This makes sense. By the way, according to Dinesh (@Pai), extraction of text is not AI. (Also, he made me sweat during our 20-minute conversation. And frankly, I ended up appreciating it :))
Also, the data your doctors created to fine-tune the underlying model is way off from the industry standard (WHO). Or they’re following another measure that I’d love to know about.
For example, for any CVD, if age is not specified:
- Blood pressure above 120/80 mmHg (up to a maximum of 130/80 mmHg)
- Total cholesterol above 200 mg/dL
- LDL above 100 mg/dL
- HDL below 60 mg/dL
are considered potential signs of CVD.
I think creating an overall score based on the different health vitals, indicators, and parameters, or using longitudinal health records to suggest a potential medical condition, is a relatively safer use case than this one, where end users are exposed directly to the interface or it’s positioned as a health companion.
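The thresholds above can be sketched as a simple flagging function. Values are copied straight from the list; the function itself is illustrative only, not medical advice or anyone’s production code, and a real system would use age- and sex-specific reference ranges:

```python
# Illustrative flagging of the CVD-related thresholds listed above
# (adult, age unspecified). Not medical advice.
def cvd_risk_flags(bp_sys: int, bp_dia: int, total_chol: float,
                   ldl: float, hdl: float) -> list[str]:
    flags = []
    if bp_sys > 120 or bp_dia > 80:    # above 120/80 mmHg (max limit 130/80)
        flags.append("elevated blood pressure")
    if total_chol > 200:               # mg/dL
        flags.append("high total cholesterol")
    if ldl > 100:                      # mg/dL
        flags.append("high LDL")
    if hdl < 60:                       # mg/dL
        flags.append("low HDL")
    return flags

print(cvd_risk_flags(132, 84, 210, 110, 45))
# ['elevated blood pressure', 'high total cholesterol', 'high LDL', 'low HDL']
```

Combining such flags into an overall score, as suggested above, would just be a weighted sum over this list rather than a per-message verdict shown to the end user.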
I was referring to OCR not being accurately described as AI.
I hope the tables turn someday and you find yourself on the builder side, Dinesh; you’ll start calling everything AI (at least let us have a small win)
I think with ChatGPT, how you frame questions can get you the answers you want to see
Here’s some reasoning for why OCR might not be categorized as AI
Sure, I don’t think we should be dragging ChatGPT into the middle of this (it’s already confused)
Humans’ abilities to see and recognize, see and read, learn, etc. are a few fundamental components of intelligence. Based on that, the see-and-recognize part in machines has been categorized as computer vision, and OCR is one type of it.
For example, a camera can also see, but we can’t call that AI because it can’t recognize. As soon as we add the intelligence of computer vision, it can start recognizing, and that’s one form of intelligence, no?
Suman, thanks for trying August and taking the time to go into what we’ve been doing at Beyond. Taking BP as an example, WHO also specifies 140/90 as the limit for hypertension. Happy to discuss the others offline. Saw what you’re building at Jile Health, will be fun to compare notes.
OCR is definitely not considered “AI” anymore. ML is used to improve the accuracy of OCR, but I wouldn’t call it AI.
I learn by doing this, Anuruddh, so the pleasure is all mine. If you think you’re right, you’re right.
I debate with Dinesh because that is a way for me to be around the people I aspire to be around <3 And he brings a different perspective as well
On the building side, we’re not building for the top 10% the way Beyond is, but our chances of getting lucky depend on learning from what’s being built for the top 5% and making it available at an affordable cost, hence I’d love to exchange notes.
I’ve been texting the August app for a few weeks now, and it’s been an interesting experience. It took 1-2 minutes to digest a 16-page DEXA scan report that I had recently received, but it only gave me broad information that I already knew.
I started by asking it to break down that report. It didn’t really answer specific questions, BUT it did offer the option of having a human analyze it.
If I did take that option: how trustworthy is the information provided by your team, and are they licensed doctors?
What level of confidentiality is maintained?
The personal tone of the texts was impressive; it even asked about my current exercise regimen and offered an enhanced training schedule!
Glad you like the experience so far.
August is designed for everyday health users right now, so advanced tasks like going from a DEXA scan to a training plan can sometimes leave gaps. That’s one of the reasons we have the human-in-the-loop option.
The human insight is done by a licensed doctor.
We only share the details relevant for the doctor’s analysis; your name and phone number are never shared. In case the doctor feels they should discuss something with you, you’ll get a message from August where you can set up a call and share whatever information you think is relevant.
On a separate note, I’ve been analysing Dexa scans and helping people plan for a few years now. Happy to connect and take a look at the report.