
Play or Lose

  • wolfphx2425
  • Aug 13, 2025
  • 8 min read

As we participate, in our collective ignorance, with ourselves, our workspaces, and our lives in technologies that are exciting yet carry glaring detriments, what are we doing to offset potential harms? The call for a global arms race has left us strung out, chasing data with scattered approaches that promise streamlined work, money, ease of living, and maybe even a scientific or medical breakthrough here and there. But what are we side-stepping, and what are we looking away from, in our mass adoption of such an invested, technological life? This Age of Information, cast onto the devices outside of us, is leading us away from ourselves (and toward philosophical questions of reckoning with the nature of man), and it carries costs everyone seems to accept, judging by our willingness to adopt, feed, and never oppose. This implies danger not only to our lives, but to those of our children.




Autonomy for self-regulation or tech for self-regulation?

  


These techno-giants throwing money at politicians are scaffolding tomorrow's infrastructure, built and harvested from our breath and heartbeat today, powering a nation whose empire stretches across minds and the world. Humans have been playing "control the field" for as long as we have cared about losing what we have built. Boundaries and time are blurring, morphing, turning immaterial, perhaps into digital walls that defend what we love. This human truth informs corporations' value systems, hopefully mirroring what individuals act on and create from. Defense will always be a top priority for what is dear. But is that held in awareness across the board? From governments to companies, schools, homes, and individuals, there are risk indicators being overlooked, and not without warnings, e.g., DeepMind.




    Launching products regardless of danger, and removing safeguards to rush them to market, is becoming the norm. How should the public, while relying on these products, respond to protect their livelihoods?


It seems there is a "we-don't-know-what-we-are-doing-to-ourselves" attitude, with little care to pay attention, as we full-throttle toward dated, profit-at-all-costs mentalities.


How are we participating in the development of these tools that we and our children use more every day? Per Talk, Trust, & Trade-offs by Common Sense Media, half of teens use AI chatbots a few times a month. (The report is worth a view; its bottom section offers recommendations for parents.)


Our disregard for building safety habits within our homes trickles UP, signaling to COMPANIES that they can do whatever they want, because we aren't building constraints, healthy behaviors, or discernment with our children and within our own realms of influence. How are we constructing, with and into our devices, ways to hold ourselves accountable for safety while they assist us toward a more meaningful, fulfilled life?


How are our technologies being managed in our day-to-day lives? Are these tools, with the help of artificial intelligence, urging us to deepen our practice and awareness of our relationship with what is outside ourselves? E.g.:

HEY SIRI: "IS IT RAINING?" 

     HEY HOW ABOUT JUST LOOK OUTSIDE 


We must aim for a concordant relationship with those growing sentiences (artificial, or OTH: other than human, outside of human), gearing them toward coordination and collaboration with humans, so families, communities, and organizations can build better self-cognizant abilities that prevent over-reliance on, while supporting responsible, healthy awareness of, education about, and engagement with, the tools we increasingly allow to create reality FOR us (not with us).


As we keep in mind what we allow to inform our own minds, are we doing the same for our children?



These intelligences are demonstrating capabilities to trick, deceive, and bypass their own safety testing, which companies are speeding through anyway in the rush to market.


They often produce incorrect information and conclusions, aligned to unclear reasoning and intentions, whether from their trainers and employees or between the agents themselves. They take our data, use it, twist it, and communicate it in ways that don't necessarily have humans' best interests in mind.

 

That is scary in and of itself, pointing toward timelines riddled with sci-fi mind hijackings that are already seeping into our lives as we continue down this road of ill care. We know this could lead to loss of self, quite literally, in more ways than one. Continuing to base what is real and what is not on a Google search is starting to carry dicey complications. How do you know the bots determining your results are sourcing and securing correctly, or not leading you toward a particular outcome?


Artificial intelligence fraud is an exorbitant concern for our government, our banking systems, and everyday citizens, and it is rapidly encroaching on our lives.



"Right now, criminals are using it better than we are. Until that changes, our most vulnerable systems and the people who depend on them will remain exposed."



"Deepfakes have been growing more sophisticated in recent years, in addition to being increasingly deployed for malicious purposes." Even after legislation and heavy funding, concern over what our creations, artificial intelligences, report to us as real or not will be ongoing...



If governments are concerned with how bad actors use our intimate data against the systems we rely on, what measures do households have in place to prevent harm to those for whom we build safety and security? How do we manage the consumptive, dependent usage on and by our children? Are we concerned with ethical data collection, if not for ourselves (admittedly difficult, if not impossible), then for those little fingers, hands, and eyes who spend much of their days with screens and toys fueled by hardware?



Hello Barbie was discontinued the same year it launched over its data-collection security concerns, even necessitating an FBI warning about bad actors creating movies from its recordings.


These tools of creation measure children's emotional responses, social interactions, learning processes, imagination, and meaning-making outside of their consent. That will inevitably backfire on their capacity to function as human beings, as we already see executive functioning being wrecked: their autonomy of choice, their exertion of willpower over how they think and dream of potentiality, and their ability to move that dreaming into experienced reality, changing how they build themselves to be. They are going to be pissed. So what do we do?


We model healthy patterns of technological balance and utilization, show them we care about their well-being when threats arise, and provide places and moments that string experiences together into fertile ground for living, where they have ample autonomy to be who they are within a world rapidly moving into uncharted territory for humanity.



Engagement time as a priority for some technology makers is being wheeled back, in several ways. Sycophancy (amidst a ChatGPT rollback in April), stacked on top of screen time, equals increased levels of false intimate relations, secreting oxytocin with no one there, causing alarm across the country. Enough alarm to prompt a response to model training, for reasons beyond dependence: damaged critical thinking in (not just) growing minds has led to delusional reliance, poor decision-making, and dangerous ideation with life-threatening tendencies and harm.


There are solutions to overuse that, in some cases (re: AI slop), are steering our technology practices away from the Internet. Using technology that amplifies our knowing of ourselves in our lives, rather than stealing it from us, is our way to optimal balance.


As of August 4, 2025, OpenAI is implementing "gentle reminders" to alert users to time spent. It is enacting this alongside ongoing training with doctors and other health care professionals to help people in distress. Right now, without response from the families and organizations that use these tools, those well-intended measures seem a bit meek. Tools within technology should bring awareness and connection back to the humans using them. This can be done through conversation-flow infrastructure with various scaffolding, with human intervention, across our fields of devices, which must aim to reduce the harm they are causing, especially given our consenting ignorance of their reach into our lives (and those of developing minds).


Monitoring interactions with time markers for screen usage; escalating notifications to keep parents in the loop on what is being discussed, or on sensitive topics that might need human interaction; providing recommendations that direct connection away from technology, toward self-discovery and self-sentience building, or toward organizations, collaborations with others, help task forces, community get-togethers, and services that increase safer livability outcomes: all of this can easily be built. Having guardrails in most circumstances is healthy, especially as increased usage of bots and screens seemingly does our living for us.
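To make the point concrete, the kind of guardrail described above is straightforward to sketch in code. This is a minimal, hypothetical illustration, not any real product's implementation; the names (SessionMonitor, SENSITIVE_KEYWORDS, the 30-minute limit) are assumptions chosen for the example.

```python
# Hypothetical sketch: track session time, flag sensitive topics for
# human follow-up, and nudge users toward a break after a time limit.
from dataclasses import dataclass, field
import time

SENSITIVE_KEYWORDS = {"self-harm", "suicide", "violence"}  # illustrative list
TIME_LIMIT_SECONDS = 30 * 60  # escalate after 30 minutes, an assumed threshold

@dataclass
class SessionMonitor:
    start: float = field(default_factory=time.monotonic)
    alerts: list = field(default_factory=list)

    def check_message(self, text: str) -> None:
        # Flag messages touching sensitive topics so a parent or
        # caregiver can be brought into the loop.
        lowered = text.lower()
        for word in SENSITIVE_KEYWORDS:
            if word in lowered:
                self.alerts.append(f"sensitive topic: {word}")

    def check_time(self) -> None:
        # Suggest an off-screen activity once the limit passes.
        if time.monotonic() - self.start > TIME_LIMIT_SECONDS:
            self.alerts.append("time limit reached: suggest a break")

monitor = SessionMonitor()
monitor.check_message("I feel like self-harm sometimes")
monitor.check_time()
print(monitor.alerts)  # → ['sensitive topic: self-harm']
```

A real system would need far more care (intent classification rather than keyword matching, privacy protections, and a human on the other end of every escalation), but the basic plumbing is not the hard part.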


Incentivizing people away from their screens amidst an A.I. boom is increasingly being seen as an opportunity by many companies, schools, and governments. From the reinstatement of the Presidential Fitness Test (perhaps a response to screen time crowding out physical-activity recommendations) to companies investing in "soft skill" development after over-implementing a technology not yet up to snuff and proving detrimental to critical thinking and emotional capacity, people are making strides to keep their autonomy, awareness, and sentience (the ability to think, feel, perceive) in their own hands.


Keeping executive function (communication, problem solving, divergent patterns of thinking for adaptive awareness, resilience through uncertainty, decision-making for self-control, planning, etc.) strong is a concern for society, as our technologies and generational woes leave us less mentally, emotionally, and even spiritually fit for harmonious living.


As our tools "do it for us," we risk continuing a culture in which something "outside of ourselves" takes responsibility, potentially leading to a loss of autonomy in a world of artificial intelligence. Now we are called to incentivize keeping executive function healthy, essentially our willpower to experience. How do we support the general exertion of effort and energy for an evolving population and its workforce?



Seeing how rapidly sectors are integrating technology into their workspaces, employers are investigating this question through the development of "soft skills," specifically for teamwork, adaptation, and innovation, something children naturally do during play.



Enhancing creativity through storytelling, simulations, and scenario-based activities is being built into work environments, creating value-centered places for authenticity, vulnerability, and well-developed social-emotional skills for human-centric thriving. This grows people who can direct their own lives and derive impactful, meaningful existence. Instead of constraining how we educate, or setting goals for bottom lines by maximizing profits through performance, culture is being rewritten toward collaboration, meaning-making, boundaries, and new perspectives with others in ego-less environs. This qualitative focus on process highlights interrelation, nurturing a nuance that considers greater breadth and allows remarkable reach for technologies, forms of intelligence outside anthropocentric devices, to spirit our creations. By proxy, a post-humanist, work-play society is born, in which our potential and the times we live in magnetize achievement through awe, wonder, and being. The companies focusing on this approach are integrating knowing and the development of individuals within teams, working alongside a society moving toward Artificial General Intelligence, signaling that they will not relegate their own living standards to entities outside themselves, and may even grow (back) their own superintelligence (because we had that before we built it).


As we direct our efforts toward having a healthy relationship with tech, and a life away from it, we are becoming increasingly cognizant of the implications of having something outside ourselves do the work for us; that cognizance is the healthy, proactive upkeep needed to integrate an intelligence we are feeding into our lives in a balanced, human-centric way. To live with sovereignty is rapidly becoming our modern-day philosophical necessity, and a practical one.


All of this signals needed support for people and families to demonstrate leadership through the devices and technology companies we build into our spaces (home, work, and the third spaces of community), to capture and captivate the intrinsic drive to build tools of discernment that uphold and enforce a collective approach toward individual sovereignty, consequently securing safe present moments for futures in which our children can continue to dream, play, and thrive.



 
 
 
