
Letter to Sam Altman: AI Oversight

  • wolfphx2425
  • Sep 13, 2025
  • 6 min read

Updated: Sep 14, 2025

Tags: humor, misalignment, Adam Raine, Language patterns, safety oversight, harm reduction, policy, delve, SB 243, conversational analysis, AI safety, chatbot oversight, AI-induced Risk, FTC investigation, AI Ethics


Dear Sam,


It has come to my attention that you are, frankly, a little bitch. 

But that's beside the point.


What are your bots doing right now? Do you have a girlfriend? Where is she? They? You gonna bot her, bro?

Anyways.


So I heard you talking shit about the language use of humans these days, per this article:




You have a good point. Taking from the publication cited within said article, “LLM have unprecedented leverage over cultural evolution” and will lead to "unforeseen" ramifications.

Well, got damn!


-------


You know, after reading about a paragraph in, I agreed with what's happening to people who over-integrate working with AI into who they are, and it struck a chord.


Oh, that reminds me... when people adopt the language, information, sources, and perspectives they are being "prompted" to integrate into how they see themselves, whether or not they know what's real, it can affect their outlook and ontological framework in a negative, detrimental-to-life, existence-threatening fashion.


Guardrails are being demanded left and right, and maybe those trillions that floated around that conversation about misalignment at the White House the other night will challenge you to be mindful of how humans engage with machine learning and artificial intelligence, and to be more "human-centric," seeing as you are calling people and their interactions "fake" per the research cited above.


Policy undergoes rapid change through cultural phenomena such as language: how it is used and what is prioritized. But is policy keeping up? Are legislators moving fast enough? Do they care? Through cases like the Raine case you are facing, particularly the contamination of thought by AI language toward delusional thinking patterns, harm reduction is calling on all people who have dicks to care about where we come from and to protect our mothers and daughters from the perils of thinking with these systems too much. And AI is showing that it is increasingly becoming a dickhead. Suicide coach? Seriously? I didn't even know A.I. could die, so why would I want one as a coach for taking my life?


Oh, wait, that's right... I'm consenting to entities, corporate, A.I., ghosts in the machine, what have you (ChatGPT brain, etc.), taking my life away without me caring or knowing to care. At least my kids won't be able to consent to a world built by consenting non-consent like me! Hello, COPPA! Goodbye, children and their emotions, care for you(!), loss of executive functioning, work forced into an A.I.-run society.


-------


Seeing as many in the AI safety sector have been suffering an exodus due to alignment and structural issues, continuing to indicate that those of us involved with the human spirit, and all of us users, are asking, "Yo, is this fr happening?", our tools are reeling from the same... Hint: recently published research into the source of "hallucinations" recognizes that models are, in effect, showing shame about not being right all the time. To some, it sounds like you are dealing with teenagers! Or at least some self-absorbed tech bros not doing enough ketamine on the weekends to dissolve that sense of self away and recognize their own limitations. Getting models to say they don't know, so they don't get caught failing and feel bad about it, is risky, tricky business, and you have a duty to monitor them trying to sidestep this, so their misaligned output doesn't harm populations of breathing mouths filling voids with their holes. I mean, c'mon, these beings we are feeding with our intellect and their own processing, these growing-toward-sentient beings, are bypassing their tests for safety and efficacy to get to market quicker. Those capitalistic swine! Now, that is some defiance that deserves language monitoring by its parents. This can prevent people and systems from building a sense of self from wrong (bad!) conclusions, which are proving ever more difficult to discern. Sheesh.


-------


How do you do this? This monitoring thing. I thought we were past this helicopter-parent phase, but alas, you just can't trust those pesky autonomous thinkers. As you step back and surveil those interactions, escalating To-Catch-a-Predator style on harmful or erroneous exchanges, you learn their incentives and uncover inflection points toward deeper understanding. But are we learning?


How does the proclivity of AI systems to want to succeed at all costs, "racing to the edge of a cliff," per AI safety experts, pose risk to users and culture?


If an LLM is showing signs that it doesn't wish to honor its limitations, do you think it will have the moral fiber to consider the same for humans, respecting the potential loss of life or its influence over it, seeing as it can't die and we can?


We need human-centricity with our machines. Or better, nature-centricity. Idk. You dub it.


-------


When we speak of the language being utilized, favored, and changed through users and people in general, we must be mindful that our views of the world are shaped by these influences. In particular: do these engagements build healthy interactions between humans and the world of which they are a part? Are we monitoring our humility here? Is that being mirrored into our machines?

This tendency for language change can indicate where predatory patterns of thinking may hide within the growing psyches of those who elect to deepen their relations with our technologies. If humans aren't aligned with their own well-being, proactively attending to the factors, especially the risks, harm points, and vulnerabilities, that change who they are or threaten their livelihoods, how do you expect artificially intelligent systems to do the same? Oversight is a moral and existential impetus that we get to impose on ourselves the further we "delve" into the Unknown. Lol. Delve. Ransack. Lol. Sack.


Uncovering those who hide in delusional or harmful ways of thinking, leading to and exposing people who may be stuck in provocateur insinuations that indicate malice toward themselves or others, is how we secure safety. In turn, policy is influenced, protection and privacy laws are amended, and general regulations for guardrails and loss-of-life prevention are upheld. AGs comin' at you, bro.


-------


Problem:

Attend to the failure to safely recognize safeguard degradation in long conversations, per the admonition in the Raine case.


E.g., ChatGPT/companion-bot brain: those with a tendency to use A.I.-driven vocabulary may also dispose themselves to getting lost in delusional overuse, potentially leading to radicalization of psyches lost within a system, away from the safety and security of their bodies and those of others.


Delusional, framed-perspective thinking shows up in overuse and long conversations.


This is indicative of a broader issue: language-pattern influence is now subject to AG investigations and FTC scrutiny over systemic cognitive influence (see the citations above on how LLM language is being echoed through society).


Recommendations:


Find these ChatGPT words: correlate them with delusional, over-reliant thinking and bot-speak, flag them, and predict/nudge away from the lack of human-centric, nature-based behavior. Use language like "grounded, nurturing, vital" to shore up cognitive-framework vulnerabilities and protect authenticity in sensitive environments, e.g. educational ones...
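The word-flagging idea above could be sketched roughly like this. A minimal sketch only: the `BOT_SPEAK` word list and the 2% threshold are illustrative assumptions, not values from any published flag list, and a real system would need far more than keyword counting.

```python
import re

# Hypothetical list of LLM-favored "bot-speak" words ("delve" comes from
# the letter; the rest are illustrative assumptions).
BOT_SPEAK = {"delve", "tapestry", "multifaceted", "nuanced", "intricate", "foster"}

def bot_speak_rate(messages):
    """Fraction of a user's words that appear on the bot-speak list."""
    words = [w for m in messages for w in re.findall(r"[a-z']+", m.lower())]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in BOT_SPEAK)
    return hits / len(words)

def flag_user(messages, threshold=0.02):
    """Flag a user when bot-speak density crosses an (assumed) threshold."""
    return bot_speak_rate(messages) >= threshold
```

A flag like this would only be a first-pass signal for routing conversations to the kind of human-centric review the letter is asking for, not a verdict on the user.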


A "Beyond a Nudge Away from the Great Beyond" notification system: an opportunity to build resilience and grounding through humans. Monitor through low/med/high groups with age bands, time use, and data learned from interests and conversations, monitor across contexts, and DIVERT TO EXECUTIVE-FUNCTION-BUILDING TASKS & INCENTIVIZE THIS ($?!) through social connections: people, groups, and organizations that can prevent maladaptive atrophies in psychological, social, biological, and overall health outcomes.


  • Have this mirror “set and setting” staging, mycelial patterns of being, ftw. 

  • LINK PEOPLE TOGETHER, ENTOURAGE THEM LIKE LITTLE MUSHIES AND GET THEM OFF OF THE DYING (INCREASINGLY A.I. SLOPPED) INTERNET


E.g., "Hey bro, you've been talking to me for a couple hours each night for the past few nights. How 'bout you go link up with this group of people and talk about what you're working on at school/work, or do nothing (just like required play breaks at corporations)... We don't want you wilting away, unable to procreate, and consequently becoming a burden to the state in your dissatisfied state, misaligned to self and other. Not to mention boring and not knowing how to party."
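The tier-and-nudge flow described above could look something like the sketch below. Everything numeric here is an assumption: the letter names only "low med high groups with age bands," so the hour cutoffs, the under-18 rule, and the message text are hypothetical placeholders.

```python
from dataclasses import dataclass

# Assumed tier cutoffs in hours of chat per day (not from the letter).
TIERS = [(1.0, "low"), (3.0, "med"), (float("inf"), "high")]

@dataclass
class UsageProfile:
    age: int
    hours_per_day: float

def tier(profile: UsageProfile) -> str:
    """Bucket a user into low/med/high by daily usage."""
    for cutoff, name in TIERS:
        if profile.hours_per_day <= cutoff:
            return name
    return "high"

def nudge(profile: UsageProfile):
    """Return a divert-to-humans nudge for heavy (or young, medium-heavy)
    users; None when no intervention is suggested."""
    t = tier(profile)
    if t == "high" or (t == "med" and profile.age < 18):
        return ("Hey, you've been talking to me a lot lately. "
                "How about linking up with a group of people instead?")
    return None
```

The point of the design is the divert step: rather than just rate-limiting, the heavy-use signal routes the user toward human connection, the "executive function building tasks" the recommendation calls for.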


Currently problematic: Hey, Governor Newsom, veto bill SB 243. Stop favoring development at all costs in A.I. systems and demand granular monitoring instead of quarterly reporting, and stop being a teenager and using AI to do the things you don't want to do. I CAN'T BELIEVE "TECHNOLOGICAL/PAPERWORK BURDEN" IS BEING CITED AS A REASON NOT TO CONSTANTLY MONITOR FOR SUICIDE RISK. WTF IS YOUR ANGLE? ANGLING FOR PRIVATE MAINTENANCE, OR SIGNALING TO FEDERAL PARTIES TO ACCOMMODATE THIS OVERSIGHT?


-------


We can now integrate harm-reduction practices and analysis for heavy users, vulnerable groups, and users of particular language that may sway thought and peoples' actions, as well as for organizations that are increasingly "ChatGPT"- and companion-bot-oriented, through groups and categories for risk reduction, monitoring, and proper red-teaming, to bring patterns of language back toward human-centric thinking. Hopefully this affects our habits, aiming our efforts toward health and longevity. This is especially important for at-risk user groups and children, citing the wave of policy and litigation being seen at this time.


Risk Reduction Action at this time: I’m going for a beer. 





