18 Comments
Fred Tribuzzo:

David, I agree with much of what you're saying. Learning and discovery should almost always be slow and tedious, with plenty of mistakes and those wonderful, satisfying moments of epiphany. Read, take notes, and then write like a madman!

Eudoxia:

Agree.

Elizabeth Sowden:

I want to know more about #2 because I think I applied for one of those, but I never onboarded: the pay was low and I didn't think it would be worth my time as a side hustle.

Sheri Oz:

Perhaps AI (or so-called AI) was not so useful for the purposes you were using it for, but I find it valuable in my work. Yes, it "guesses," "hallucinates," or "makes something up" because it doesn't like to say "I don't know," or for whatever other reason it was programmed that way. But knowing that, I find I can prompt it to look deeper, or I modify the question. Sometimes the quality of the response depends on the quality of the prompt, but not always.

At first, I wondered what kind of a time-saver it supposedly was when I had to verify every fact it gave me (and I sometimes even found that sources it provided were dead ends too). So I changed how I regard it. I find it useful for brainstorming, as if I am working with a team, and it can trigger me to ask questions I had not yet considered on my own. Even the time invested in checking everything often led me down new paths that opened up learning I had not thought I needed for a particular project. Had I not had access to the "new" AI (and, after all, what are search engines if not more primitive AIs), it would have taken me far longer to produce my articles.

Furthermore, when it produces errors, I tell it so, giving the correction. This process of working "together" has made it more sensitive to my way of working, and brainstorming with it has grown more and more efficient.

At first, I did find it gave me anti-Zionist (but not antisemitic) materials and terminology, but when I told it what I expect, the kind of language I expect, and my definitions of terms, it learned to adapt. I do hope that with more of us using AIs in our work, our Zionist language and views become part of the foundations underlying the knowledge base of the AIs.

At a more casual level, I used ChatGPT to help me plan an upcoming month-long trip to the Balkans. Here again, I check everything it gives me on regular travel websites, but it saved me so much time by preparing the skeletal framework of the trip, upon which I built the rest.

In short, I think our satisfaction with using AI, in work and for non-work applications, depends on our approach to it. It's not a one-size-fits-all type of thing. "My" AI is something I would be very sad to lose access to.

David Swindle 🟦:

“I do hope that with more of us using AIs in our work, our Zionist language and views become part of the foundations underlying the knowledge base of the AIs.”

That’s not how the technology works. Our typing pro-Israel ideas into an LLM is not somehow going to make that LLM more likely to say pro-Israel things to some other user. The model’s weights are fixed at training time; a chat session doesn’t rewrite them.

Sheri Oz:

I understand. Oh well. So it is only as pro-Israel or anti-Israel as its developers. And each user has to be sure to prompt it to access multiple sources and seek balance.

So we are back to the understanding that education should teach people to ask questions and analyze the answers and ask more questions -- should teach people to be informed consumers of the media, and now of AIs. It's almost like a lost cause, but I won't give up.

David Swindle 🟦:

See my Algemeiner stories last week about AI antisemitism on Sora?

https://www.algemeiner.com/2025/10/24/adl-releases-report-revealing-high-failure-rates-for-generative-ai-video-apps-to-block-antisemitic-prompts/

https://www.algemeiner.com/2025/10/20/antisemitic-ai-generated-videos-flood-openais-new-sora-2-app/

I spent so much time starting the research into this last week that I got very upset. I’ve been in something of a panic since discovering how bad it’s going to get. These LLMs operate via randomness - they may “accidentally” put out something antisemitic or racist or misogynistic. That’s just how the technology works.
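A minimal sketch of where that randomness enters, assuming a toy vocabulary of three candidate tokens (real models are vastly larger, but the sampling step is the same in spirit):

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Pick the next token by sampling from a softmax distribution.

    Nothing in this step checks whether a token is true or appropriate;
    a low-scoring token is merely unlikely, never impossible.
    """
    # Softmax over the model's raw scores, scaled by temperature.
    scaled = {tok: s / temperature for tok, s in logits.items()}
    peak = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - peak) for tok, s in scaled.items()}
    total = sum(exps.values())

    # Weighted random draw: the same prompt can yield different tokens.
    r = random.random() * total
    cumulative = 0.0
    for tok, e in exps.items():
        cumulative += e
        if r < cumulative:
            return tok
    return tok  # guard against floating-point rounding

# Hypothetical scores a model might assign to candidate next tokens.
candidates = {"benign": 2.0, "neutral": 1.5, "offensive": 0.3}
print([sample_next_token(candidates) for _ in range(10)])
```

Run it a few times: the low-scoring token still comes out occasionally. Scale that up to billions of draws across millions of users and “accidental” bad outputs are a statistical certainty.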

Sheri Oz:

OK. You're talking about generation of videos, something I don't see myself ever doing, so my experience with AI is restricted to text-based research. I can see what you mean about the problem with AI-generated videos. It reminds me of over a decade ago, when a colleague, an expert in sex offender assessment and treatment, was asked by a start-up to help them design programmes that could be applied on the Internet to combat pedophile targeting and grooming of kids. That was the first thing I thought of while beginning to read your article: the bit where you mention bringing in experts in the effort to keep AIs safe needs greater emphasis.

David Swindle 🟦:

I’m starting to fear that the technology can’t be made safer. This is just what it does. It is a randomness machine.

Linda C:

Years ago, doing conventional research (in print materials), I found one of the most fruitful approaches was to follow a thread prompted by a footnote in other material.

David Swindle 🟦:

With AI “footnotes” I’ve found that the models often make them up. And if they don’t, then they won’t accurately summarize what they do cite.

Linda C:

😰

Russell Gold:

"Artificial Intelligence" is not a marketing term. Within Computer Science, it refers to a class of problems that are presumed not to be solvable via traditional methods of computing. Genetic Algorithms, Neural Networks, Pattern Recognition and Large Language Models are all AI techniques.

David Swindle 🟦:

The books I’ve read about the history of the field disagree. The term has no agreed-upon, universal definition. Which is much of the problem. People can just slap “AI” on whatever they want as a way to make people feel like they’re in Star Trek.

Fred Zimmerman:

https://arxiv.org/abs/2510.23627

AI-Driven Development of a Publishing Imprint: Xynapse Traces

Fred Zimmerman

Xynapse Traces is an experimental publishing imprint created via a fusion of human and algorithmic methods using a configuration-driven architecture and a multi-model AI integration framework. The system achieved a remarkable 90% reduction in time-to-market (from a typical 6-12 months to just 2-4 weeks), with 80% cost reduction compared to traditional imprint development, while publishing 52 books in its first year and maintaining exceptional quality metrics, including 99% citation accuracy and 100% validation success after initial corrections. Key technical innovations include a continuous ideation pipeline with tournament-style evaluation, a novel codex design for transcriptive meditation practice, comprehensive automation spanning from ideation through production and distribution, and publisher personas that define and guide the imprint's mission. The system also integrates automated verification with human oversight, ensuring that gains in speed do not compromise publishing standards. This effort has significant implications for the future of book publishing, suggesting new paradigms for human-AI collaboration that democratize access to sophisticated publishing capabilities and make previously unviable niche markets accessible.
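The abstract doesn't include code, but as a rough sketch of what “tournament-style evaluation” of candidate book ideas might look like (the ideas and the scoring function below are hypothetical stand-ins, not the paper's actual system):

```python
import random

def tournament_select(candidates: list[str], score) -> str:
    """Pit random pairs of candidates against each other; winners advance.

    `score` assigns a quality to each candidate; in a real pipeline it
    might come from a human judge or an LLM comparison prompt.
    """
    pool = list(candidates)
    while len(pool) > 1:
        random.shuffle(pool)
        # Pair off candidates; an odd one out gets a bye to the next round.
        next_pool = [pool[-1]] if len(pool) % 2 else []
        for a, b in zip(pool[0::2], pool[1::2]):
            next_pool.append(a if score(a) >= score(b) else b)
        pool = next_pool
    return pool[0]

ideas = ["transcriptive meditation codex", "annotated field notes", "trace atlas"]
print(tournament_select(ideas, score=len))  # stand-in: longer title "wins"
```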

Linda C:

You say AI "lies." How is a lie possible without intent?

As for accuracy, noting how many times Autocorrect (aka AutoINcorrect) messes up a text and subs in a misspelling, isn't that a sign not to trust larger LLMs, and also a sign that typos don't prove human authorship?

David Swindle 🟦:

It is the intent of the programmers. It is “generative” AI - it is guessing and “generating” output based on analyzing patterns in large bodies of text. It is not applying any sort of “truth filter” or “fact checking” to what it presents. Try testing it out more extensively and you will see.
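A toy illustration of “generating from patterns”: a bigram model that continues text purely from word co-occurrence, with no notion of whether the output is true (a deliberately tiny caricature, not how any production LLM is built):

```python
import random
from collections import defaultdict

# "Training": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Extend `start` word by word using only observed patterns.

    Note what is absent: no lookup against facts, no truth filter.
    Fluent-sounding sequences emerge purely from co-occurrence.
    """
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the fish" - plausible, unchecked
```

Every continuation is statistically plausible; none is verified. Real LLMs do this with vastly richer patterns, which makes the guesses more convincing, not more checked.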

Linda C:

AutoINcorrect is very bad at predicting text, though. Truth aside, it seems to limit itself to choices using the first letter of a word. So if the writer hits s instead of d, AI suggests only choices starting with s, even if those don't fit logically with what's already been entered. And it sneaks in errors.

Example: I just typed "choices starting with s" and AutoINcorrect changed the s to some. This time I caught it.
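A sketch of that first-letter behavior, assuming a naive suggestion strategy (real autocorrect systems use edit distance and language models, but this reproduces the failure mode described; the word list is hypothetical):

```python
def suggest(typed: str, dictionary: list[str], limit: int = 3) -> list[str]:
    """Naive autocorrect: only consider words sharing the first letter.

    If the user hit 's' when they meant 'd', every suggestion starts
    with 's' - the intended word can never appear, and a wrong word
    may be substituted silently.
    """
    matches = [w for w in dictionary if w.startswith(typed[0])]
    # Prefer words closest in length to what was typed.
    matches.sort(key=lambda w: abs(len(w) - len(typed)))
    return matches[:limit]

words = ["starting", "some", "said", "doing", "done", "dart"]
print(suggest("s", words))  # ['some', 'said', 'starting'] - no d-words offered
```

The first suggestion here is “some”, which is exactly the substitution described in the example above.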
