Sparks

  • Future Faith

    We are evolving our AI capabilities at breakneck speed. Driven by competition and profit, progress is given the utmost importance; beating the competition is paramount.

    At the same time, everyone involved in or observing this phase of humanity is aware that it holds immense promise and also the potential for great harm. AI, trained properly, has the potential to avoid biases and falsehoods while assessing, analyzing, and drawing high-probability conclusions from solid data, and its laser-focused pattern matching can outperform human capability in many domains. This can be leveraged to discover novel and highly effective solutions for a myriad of human ailments and challenges (cancers, technological barriers; the list is limitless). It can also be used for harm: unmitigable hacking, digital theft, and the development of tools and weapons with the intention of causing harm.

    Whichever path is pursued, the engines of AI consume massive amounts of energy and other natural resources: water, rare-earth minerals, and more.

    In the back of our consciousness, we seem to be trading near-term risk against the belief that these advancements will yield results that help us eliminate the problems we already see and those we envision for the future. I hope our faith here isn’t misguided.

  • Smoke or Fire?

    Can the mind ever see past the illusions its own brain creates?

    The capacity of the brain — the physical structure, the activated neurons, the chemical distribution, and probably a myriad of other factors — to alter the perception of reality is astonishing. The consciousness that perceives this reality is easily convinced by the presentation and heavily disadvantaged in modifying it to suit its needs.

    This may be one of humanity’s greatest challenges. While the degree of impact varies, nearly everyone struggles with this condition to some extent—though a few may have found reliable ways to rise above it.

    Refined in collaboration with ChatGPT.

  • Wallets Over Votes

    We hire politicians by voting for them based on their promises, ideals, and principles. Some of them turn around and sell out their employers to third parties for more money than they could make honorably serving their trustees. When these politicians appoint business owners to accomplish certain objectives, and those objectives, means, or policies don’t align with the citizens who voted for the politicians, ceasing to purchase the goods and services sold by the business owner is a legitimate (and very effective) way to vote disapproval of the un-elected individual. In an environment where civilian citizens are losing more and more of the levers of control to concentrated wealth and fringe ideologies, democracy has a better chance when additional legitimate levers are applied to achieve a favorable outcome for the greater public.

    (This post was refined in collaboration with Anthropic’s Claude)

  • Artificial Motivation

    The last great frontier for AI, beyond achieving truly semantic reasoning, will be motivation. We see this with humans: we have Organic Intelligence, but an unmotivated human may apply none of it to their existence. Our motivations drive us to achieve, whether for bad outcomes or good, because we have an end goal we are motivated to reach. How would we instill motivation into AI? Are directives enough? Can we program competitiveness? Can an AI tell the difference in quality of the electricity it consumes, or in the capabilities of the hardware on which it runs? What will cause an AI instance to want more: to achieve something it doesn’t already have, or to achieve more of something it does?
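
    One concrete handle researchers already have on this question is intrinsic motivation in reinforcement learning, where an agent is rewarded for its own prediction error (“curiosity”) rather than for any external directive. Below is a minimal sketch of that idea in Python; the two-state world, the names, and the numbers are all invented for illustration, not a real system.

      # Toy world: states are integers and transitions are fixed. The agent's
      # "model" predicts the next state; its only reward is prediction error.
      transitions = {0: 1, 1: 0}      # a small, fully predictable loop
      prediction  = {0: 0, 1: 0}      # the agent's current (initially wrong) model

      def intrinsic_reward(state):
          # Curiosity-style motivation: reward equals surprise, so the agent
          # is driven toward whatever it cannot yet predict.
          actual = transitions[state]
          error = abs(prediction[state] - actual)
          prediction[state] = actual  # learn from the observation
          return error

      state = 0
      for step in range(4):
          reward = intrinsic_reward(state)
          print(f"step {step}: state {state} -> curiosity reward {reward}")
          state = transitions[state]
      # Prints rewards 1, 0, 0, 0: once the world is fully predicted,
      # this kind of "motivation" fades to nothing.

    The sketch also hints at an answer to “what will cause an AI to want more”: under this scheme, nothing, once its model of the world is complete.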

  • Chasing Cognition

    I had some thoughts on GenAI:

    We have Generative AI but we don’t have Cognitive AI. Generative AI creates new content using weights, probabilities and basic context (which is based on probability from training data) but the analysis and content generated isn’t novel or based on reasoning across data – it’s not information from data. For that you need Cognitive AI.

    ChatGPT, with prompts, refined it to the following:

    We have Generative AI, but we don’t yet have Cognitive AI.

    Generative models create impressive content — fluent, convincing, and often surprising. But the surprise lies in form, not in origin. What looks novel is usually a high-probability remix of a low-probability combination. It’s simulation, not discovery.

    There’s no awareness. No purpose. No internal thread of logic connecting “why” one idea builds upon another. It’s not invention — it’s interpolation.

    Cognitive AI, if and when it arrives, will reason across data. It will generate content because of insight, not merely in the shape of it. It will model the world, not just mimic our words about it.

    Until then, calling what these systems do “creative” or “novel” might be more wishful than accurate. We’re not watching ideas unfold — we’re watching echoes of them recombine.

    It not only corrected some false assertions but also helped make my thought more readable and less terse.
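
    To make the “weights and probabilities” point concrete, here is a minimal sketch of how a generative model picks its next token: a softmax over raw scores (logits), then a weighted random draw. The vocabulary and numbers are invented for illustration; only the mechanism is the standard one.

      import math, random

      vocab  = ["mat", "roof", "moon", "keyboard"]
      logits = [3.2, 1.1, 0.3, -1.0]   # hypothetical scores for the next token

      def softmax(xs):
          exps = [math.exp(x) for x in xs]
          total = sum(exps)
          return [e / total for e in exps]

      probs = softmax(logits)
      # No reasoning happens here: the next word is a weighted draw.
      next_token = random.choices(vocab, weights=probs, k=1)[0]
      print([f"{w}: {p:.3f}" for w, p in zip(vocab, probs)], "->", next_token)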

  • Brain overload

    Humans have to work with a very large vocabulary: sometimes in acronym form, sometimes in unpronounceable terms, and at other times in overloaded terms and reused acronyms. Proper understanding requires context in addition to a vast memorization capability. In addition, the ability to comprehend concepts is invaluable.

    AI/ML is able to accomplish this, but without true comprehension; in the case of most current models, it is a probability-based semblance of understanding.

    Some humans excel at this, others less so. Essentially, as with most things, there is a bell curve representing how humans deal with this knowledge phenomenon.

    The AI/ML phenomenon extends beyond just language. Depending on the modality—whether it’s text, image, audio, or another form—and the way data is tokenized and embedded into tensors, these systems can process a vast range of input types. Their capacity to generalize across modalities reinforces the illusion of comprehension. Unlike humans, they can do this across datasets of a scale and diversity that no individual could ever realistically engage with.
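
    As a concrete illustration of that tokenize-and-embed step, here is a minimal sketch for text alone; the toy vocabulary and random weights stand in for a learned tokenizer and trained embeddings, and the same pattern extends to other modalities.

      import numpy as np

      vocab = {"the": 0, "cat": 1, "sat": 2, "<unk>": 3}
      rng = np.random.default_rng(0)
      embedding_table = rng.normal(size=(len(vocab), 8))  # 8-dim vectors

      def tokenize(text):
          # Real tokenizers use subword units (BPE etc.); whitespace
          # splitting is a toy approximation.
          return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

      ids = tokenize("The cat sat")
      tensor = embedding_table[ids]   # shape: (3 tokens, 8 dimensions)
      print(ids, tensor.shape)        # [0, 1, 2] (3, 8)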

  • Intelligence Model

    You have to create language-neutral, sensor-based memory contexts (humans have vision, smell, sound, touch, and taste). Each recollection should associate with as many senses as possible. Language can then be applied to this multi-modal experience.
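
    A minimal sketch of the data structure this suggests, in Python; the field names and embedding sizes are hypothetical, not drawn from any existing system.

      from dataclasses import dataclass, field
      from typing import Optional
      import numpy as np

      # One memory, one vector per sense; language is attached afterwards
      # rather than used as the storage format.
      @dataclass
      class Memory:
          vision: Optional[np.ndarray] = None
          smell:  Optional[np.ndarray] = None
          sound:  Optional[np.ndarray] = None
          touch:  Optional[np.ndarray] = None
          taste:  Optional[np.ndarray] = None
          labels: list = field(default_factory=list)  # language, applied later

      rng = np.random.default_rng(1)
      beach = Memory(vision=rng.normal(size=16), sound=rng.normal(size=16))
      beach.labels.append("a walk on the beach")  # language layered on top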

  • The Engine of Extraction

    Society as set up today has taken purpose from the common citizen and turned us instead into resources—fuel for the ambitions of an oligarchy. The de facto economic engine that drives our world rewards initiatives that extract value from a captive audience, largely unaware of their indentured servitude.

    Analysis by ChatGPT

    This isn’t a metaphor. It’s a pattern.

    Private equity strips companies for parts, prioritizing short-term gains over long-term livelihood. Tech platforms monetize attention and behavior, treating human lives as data streams to be optimized and sold. Even sustainability—our supposed salvation—is often just a new mask for old systems of profit-first exploitation.

    Meanwhile, the average person is told they’re free—free to choose between a gig, a side hustle, or a second job. Free to pay off interest forever. Free to scroll and consume.

    Purpose isn’t lost—it’s been replaced. Swapped out for utility. Our creativity, labor, and even our focus are harvested, packaged, and sold upward.

    But here’s the quiet subversion: models exist that don’t extract. There are systems built on empowerment, on regeneration, on re-humanizing value. They just don’t scale as fast. Not yet.

    The question is: how long do we stay fuel for the machine before we remember we were meant to be drivers?

  • Do you recall…

    A dream? A thought? An actual experience? How do you tell them apart if they are recollections in your memory? If enough of your senses are recruited in the description and generation of a memory, there’s really no way to differentiate them, except if you additionally stored metadata identifying the memory’s origin. They get stored (and at some point, discarded) in the same way. Granted, for most people, remembering a dream requires active participation the next day to form a long-term memory.

    I recall having a dream. The event was “normal” enough (a train ride, something feasible to associate with a vacation) that it could conceivably have happened. In the dream I took a photograph of the event on my phone. That photo doesn’t exist on my phone, so I know it was just a dream. But had I not done that in the dream, yet still bothered to think about the dream in the morning, forming a long-term memory, would I be able to tell it apart as a dream and not a “real” experience? After all, every memory is an amalgam of the results of your senses, with potentially an associated timeline. Beyond that, your brain doesn’t know the difference. The metadata associated with the storage in your brain helps you discern the difference when you recall the memory. So, what’s real and what’s just a figment?
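
    A minimal sketch of that metadata idea; the Recollection record and its fields are hypothetical, just a way to make the claim concrete.

      from dataclasses import dataclass
      from typing import Optional

      # Content and timeline are stored the same way for dreams and events;
      # only a metadata tag can tell them apart, and it may never have been
      # stored at all.
      @dataclass
      class Recollection:
          content: str               # the amalgam of sensory results
          timeline: str              # the associated timeline, if any
          source: Optional[str]      # "experience", "dream", or None (lost)

      trip = Recollection("train ride through the hills", "last summer", None)

      if trip.source is None:
          # Without the metadata, recall cannot distinguish dream from event.
          print("Indistinguishable on recall:", trip.content)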

  • Purpose and existence

    What ML lacks now, and what humans are rapidly losing, is purpose. The human condition of emotion, sense of self, and curiosity combines to give us a sense of purpose, and we are beginning to see these pillars weaken even as ML may be gaining, or fashioning, its own similar baselines to generate a definition of purpose for its existence.