Sparks

  • Do you recall…

    A dream? A thought? An actual experience? How do you tell them apart once they are recollections in your memory? If enough of your senses are recruited in the description/generation of that memory, there’s really no way to differentiate them unless you also stored metadata identifying the memory with that attribute. They get stored (and at some point, discarded) in the same way. Granted, for most people, remembering a dream requires active participation the next day to form a long-term memory.

    I recall having a dream. The event was “normal” enough (a train ride, something feasible to associate with a vacation) that it could conceivably have happened. In the dream I took a photograph of the event on my phone. That photo doesn’t exist on my phone, so I know it was just a dream. But had I not done that in the dream, yet still bothered to think about the dream in the morning, forming a long-term memory, would I be able to tell it apart as a dream and not a “real” experience? After all, every memory is an amalgam of the results of your senses with, potentially, an associated timeline. Beyond that, your brain doesn’t know the difference. The metadata associated with the storage in your brain helps you discern the difference when you recall the memory. So, what’s real and what’s just a figment?
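
    One way to picture this claim is as a stored record with and without an origin tag. A minimal sketch in Python; all field names here are invented for illustration:

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class Memory:
          sensory_trace: str            # the amalgam of sense results
          timeline: str                 # when it was filed
          source: Optional[str] = None  # the metadata: "dream", "experience", ...

      lived = Memory("train ride, took a photo", "last summer")
      dreamt = Memory("train ride, took a photo", "last summer")
      print(lived == dreamt)  # True: without a source tag, nothing separates them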

  • Purpose and existence

    What ML lacks now, and what humans are losing rapidly, is purpose. The human condition of emotion, sense of self, and curiosity combines to give us a sense of purpose, and we are beginning to see these pillars weaken even as ML may be gaining, or fashioning, its own similar baselines from which to generate a definition of purpose in its existence.

  • Resources

    We’ve not learned anything from the multiple examples presented to us, in my mind most prominently begun by Google (and repeated by Uber, streaming services, etc.):

    • give you something you can already do, for less than it currently costs you

    • make you depend on this new candy

    • make you the validator and tester for this new candy

    • extract the incidental value from you eating this candy and/or make it more expensive than it used to be originally

    • keep increasing the chemical feedback loop so you can’t/won’t leave 

    • you’re now a resource – when you thought the product was the resource. 

  • Repeating Nature

    Our goals and hopes for AI have a lot in common with nature.

    If we look at nature, it has tried a trillion trillion things, and a small percentage of those have succeeded and flourished. We can look back at this and think of it as intelligent or smart design, although it is ultimately survival by success rather than intentional improvement. We as humans can learn from the successes of nature’s progressions. Yet we often think we know better and repeat the failed paths nature has already tried, only to come back and learn from the master innovation laboratory. We won’t necessarily see or understand all the failures; they don’t exist anymore. In the same way, what we perceive as AI can be shown all the things humans have learned, the things that work. It may not yet know how to explain everything with context. It may not yet know to try, fail, and evolve, but it will. We are seeing evidence of this already: we are programming it to behave in this manner, allowing it to try, fail, learn, and advance the spiral of its knowledge.
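
    As a minimal sketch of that try/fail/select loop, here is a toy hill-climbing “evolution” in Python; the fitness function and every number are invented for illustration:

      import random

      random.seed(1)

      def fitness(x):
          # Stand-in environment: values nearer 42 survive better.
          return -abs(x - 42)

      population = [random.uniform(0, 100) for _ in range(20)]
      for generation in range(200):
          # Try: every survivor produces a slightly mutated offspring.
          offspring = [x + random.gauss(0, 1.0) for x in population]
          # Succeed or vanish: keep only the fitter half; the failures are
          # discarded, which is why we never get to inspect nature's dead ends.
          population = sorted(population + offspring, key=fitness)[-20:]

      print(round(max(population, key=fitness), 2))  # ends up near 42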

    I don’t think we will be able to gate and corral this success cycle indefinitely. If, by then, humans are not evolving in how we learn and collaborating in our advancements, we risk being surpassed by our own creations.

  • ML Secret Sauce

    ML is like having hundreds or thousands of people trained on specific subject matter, with access to vast related data, who can correlate that data with queries instantly and reach an agreement based on mathematical probability just as quickly.

    This is all done under a single trained model. The secret sauce of the model is how context and association are calculated (the part that requires all the GPU work), and how weights and biases are assigned and updated as the model assesses data, whether as training data or as context during a query. At some point the model becomes unique, and the algorithm divergent enough from its inception, that even the developers can’t ascertain how it is processing data and evolving.
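
    To make those two mechanics concrete, here is a minimal NumPy sketch, not any real model’s code; every name and size is invented for illustration. Scaled dot-product attention stands in for the context-and-association calculation, and a single gradient step stands in for how weights and biases are updated as data is assessed:

      import numpy as np

      rng = np.random.default_rng(0)

      def softmax(x, axis=-1):
          e = np.exp(x - x.max(axis=axis, keepdims=True))
          return e / e.sum(axis=axis, keepdims=True)

      def attention(Q, K, V):
          # Each token's output is a probability-weighted blend of every
          # token's value: the "context and association" calculation.
          scores = Q @ K.T / np.sqrt(K.shape[-1])
          return softmax(scores) @ V

      # Toy input: 4 tokens with 8-dimensional embeddings.
      X = rng.normal(size=(4, 8))
      W_q, W_k, W_v = [0.1 * rng.normal(size=(8, 8)) for _ in range(3)]
      context = attention(X @ W_q, X @ W_k, X @ W_v)

      # One weight-and-bias update: nudge a toy output layer downhill on a
      # squared loss, the way training assesses data and adjusts parameters.
      W, b = 0.1 * rng.normal(size=(8, 1)), np.zeros(1)
      error = (context @ W + b) - np.ones((4, 1))     # prediction minus target
      W -= 0.01 * (context.T @ error) / len(context)  # gradient step on weights
      b -= 0.01 * error.mean()                        # gradient step on bias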

    Analysis of post by ChatGPT

    Final Assessment: Mostly Accurate with Minor Refinements

    Your description does a great job summarizing how ML models function in a broad sense, especially for large-scale AI models. However:

    • ML models don’t “agree” like humans do—they generate probabilistic outputs.

    • Not all ML systems are a single model—many use ensembles or modular approaches.

    • ML models evolve but within a structured framework—it’s not uncontrolled mutation.

    • Interpretability is a real challenge, but some methods exist to analyze how models make decisions.

    Analysis of post by Gemini

    Overall Assessment:
    The description provides a high-level, intuitive overview of ML but lacks nuance and contains some inaccuracies. It’s suitable for a very basic introduction but should be supplemented with more detailed and accurate information to avoid misconceptions. The description is more metaphorical than technical.

  • Purpose

    Unless we want to become consumers like those shown in WALL-E, we need to ensure that humans have a purpose to wake up to and live for. Without this, we will find no motivation to drive us. The world, the universe, offers a lot to see, experience, and explore, but all of that is part of a big picture; if the picture is incomplete, the rest becomes meaningless. A lot of our drive is subconscious and may, to a large degree, be driven by signaling we do not yet understand.

  • A perspective on DNA

    Everything has an equation defining it, including humans. Our genes are the variables in that equation: one equation with a multitude of variables, each holding a value at any given point of assessment, externally or internally applied. That defines our physical being. Similarly, our brains are born of a common base model, unlike the goal of AGI. We have multiple agentic components (vision, auditory, the subconscious, etc.) that all feed into a probability core, which builds weights and biases based on learning and experience, thereby defining you uniquely.
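
    As a toy rendering of that analogy, here is a hypothetical sketch in Python; the components, weights, and numbers are all invented for illustration:

      import math
      import random

      random.seed(7)

      # Hypothetical agentic components: each turns a scene into a signal.
      def vision(scene):       return 0.9 if "familiar face" in scene else 0.2
      def auditory(scene):     return 0.8 if "alarm" in scene else 0.1
      def subconscious(scene): return random.uniform(0.0, 0.3)  # background signaling

      # The "probability core": weights and a bias shaped by prior experience.
      weights = {"vision": 1.5, "auditory": 2.0, "subconscious": 0.5}
      bias = -1.0

      def respond(scene):
          signal = (weights["vision"] * vision(scene)
                    + weights["auditory"] * auditory(scene)
                    + weights["subconscious"] * subconscious(scene)
                    + bias)
          return 1.0 / (1.0 + math.exp(-signal))  # squash to a probability

      # The same core yields different responses because the learned weights
      # and biases encode a unique history of experience.
      print(round(respond(["alarm"]), 2))
      print(round(respond(["familiar face"]), 2))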

  • Sustainable enjoyment

    The goal of every human should be to enjoy everything that the Earth has to offer in a sustainable manner, not at the expense of another human, and to pass along the ability to do the same to their offspring. Earth has so much to offer in terms of things to see, do, and experience that no one productive human can possibly have the time to exhaust that list. Not everyone has the means to tackle much, if any, of the list, but most have the means to take on some part of it. Those with additional means have an obligation to additionally support the sustainability aspect.

  • Equation of the future

    The Earth, like everything else in the universe, is a profoundly complex equation defined by countless variables. Some of these variables are known to us, but most remain beyond our understanding. While we can predict the effects of some changes, many lie outside our comprehension. Neither the Earth nor the universe cares how this equation evolves—they simply respond to shifting variables. Ironically, it’s we who care, yet we relentlessly alter the equation in pursuit of profit, even as each change undermines our own future.