In reply to the discussion: Remember When Wikipedia Was Considered Suspect?
What you get out of it is in large part dependent on what you put into it (prompting).
AI is moving exponentially fast right now. If you tried ChatGPT once eight months ago, things have changed massively since then.
For my use cases I use Gemini 2.5 Pro and several other models regularly. When I am working on a research project, NotebookLM is my go-to.
I will give you the test I used to gauge the usefulness of NotebookLM. First, I uploaded PDFs of the theoretical material I was working with (Foucault). I also uploaded a previous paper that used the same theoretical framework I wanted to use. Then I uploaded a copy of "Romeo and Juliet." I asked the model to apply the theory to the text and search it for examples that would support my theory about Foucauldian epistemics in Shakespeare's R&J. Within about two minutes, it had zeroed in on the passages that were exactly on point for what I wanted to prove in my thesis. It literally would have saved me about a week of work.
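NotebookLM itself is point-and-click, so no code is involved, but for anyone curious what that source-grounded workflow looks like under the hood, here is a minimal sketch using a general-purpose LLM API instead. It assumes the OpenAI Python SDK and plain-text copies of the sources; the file names, model name, and prompt wording are all illustrative, not what NotebookLM actually does internally.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical plain-text versions of the uploaded sources.
theory = Path("foucault_notes.txt").read_text()
prior_paper = Path("previous_paper.txt").read_text()
play = Path("romeo_and_juliet.txt").read_text()

# Pack the sources into one grounded prompt and ask for on-point passages.
prompt = (
    "Apply the theoretical framework below to the play text and quote the "
    "specific passages that best support a reading based on Foucauldian "
    "epistemics.\n\n"
    f"FRAMEWORK:\n{theory}\n\n"
    f"EXAMPLE OF THE FRAMEWORK IN USE:\n{prior_paper}\n\n"
    f"PLAY TEXT:\n{play}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # stand-in model; the workflow above used NotebookLM/Gemini
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point of the sketch is simply that the sources you upload become the grounding material the model searches, which is why the quality of what you feed it matters so much.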
What AI isn't: a 100% solution to whatever your use case is.
What it is: an extremely fast and efficient tool.
Now a non-academic example: in Texas, we can appeal our property taxes. AI put together my appeal from the documents I uploaded, plus did research on Zillow, other real estate websites, and even the official county tax record database. It then ran a cost and square footage analysis on recent sales and extrapolated trends in home prices in my neighborhood. It did the work people pay $500+ for online, for the cost of my subscription and about five minutes of my time. It was thorough as hell and accurate.
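For a sense of what that comparable-sales analysis amounts to, here is a rough sketch of the arithmetic in Python. The CSV file, its columns, and the subject property's square footage are all assumptions for illustration; the actual appeal was generated by the model, not by this code.

```python
import pandas as pd
import numpy as np

# Hypothetical file of recent neighborhood sales; assumed columns:
# sale_date (YYYY-MM-DD), sale_price (USD), sqft (finished square feet).
sales = pd.read_csv("recent_sales.csv", parse_dates=["sale_date"])

# Price per square foot for each comparable sale.
sales["price_per_sqft"] = sales["sale_price"] / sales["sqft"]

# Simple linear trend of $/sqft over time (days since the earliest sale).
days = (sales["sale_date"] - sales["sale_date"].min()).dt.days
slope, intercept = np.polyfit(days, sales["price_per_sqft"], 1)

print(f"Median $/sqft: {sales['price_per_sqft'].median():.2f}")
print(f"Trend: {slope * 30:+.2f} $/sqft per month")

# Rough implied value for a subject property (square footage is an assumption).
subject_sqft = 1850
implied_value = sales["price_per_sqft"].median() * subject_sqft
print(f"Implied value at median $/sqft: ${implied_value:,.0f}")
```

None of this is hard math; the value was that the model gathered the comparables, ran this kind of analysis, and wrote it up as an appeal in one pass.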
However, if you are using an LLM, you won't get good results if you don't know how to prompt for them.
LLMs are a lot like a next-level metaprogramming language.
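One way to see the analogy: a prompt behaves like a small program with parameters, and the model is the interpreter that runs it. A minimal sketch, with every name and wording choice here purely illustrative:

```python
# A prompt template treated like a function: change the parameters and you
# change what the "program" does when the model executes it.
def build_prompt(framework: str, source_text: str, task: str) -> str:
    return (
        "You are a research assistant.\n"
        f"Theoretical framework: {framework}\n"
        f"Task: {task}\n"
        "Quote passages verbatim from the source below and explain how each "
        "one supports the framework.\n\n"
        f"SOURCE:\n{source_text}"
    )

prompt = build_prompt(
    framework="Foucauldian epistemics (power/knowledge)",
    source_text=open("romeo_and_juliet.txt").read(),  # hypothetical plain-text copy
    task="Find passages where characters' knowledge is produced or policed by authority.",
)
# `prompt` is then sent to whichever model you prefer (Gemini, ChatGPT, etc.).
```

Swap in a different framework, source, or task and you get a different result from the same "code," which is why learning to prompt well pays off so quickly.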