AI Alignment: A Red Herring that Can’t Work…


Wait… what??

In our ongoing discourse on AI alignment and intergenerational responsibility, we stand at a pivotal juncture, confronting cognitive tendencies and their societal ramifications.

From a neurodivergent perspective, there is a pattern of deferred consequences — a temporal convergence where effects are not resolved but merely postponed, often burdening future generations.

Our willingness to burden our children and successive generations, without so much as an apology or an attempt to mitigate the suffering our predilections cause them, betrays our imperfect understanding of the concept of ‘love.’

This pattern extends beyond AI to the economic strategies employed by governments and central banks. Celebrated as solutions, these strategies are, in reality, burdens passed down, with the decision-makers seldom facing the repercussions.

It’s a cycle of attribution error, where the prosperity of one era is credited to individual actions, while the struggles of the next are viewed as personal failures, disregarding the role of circumstances.

The frustration stemming from this disconnect is profound, especially when it manifests as judgment from previous generations. They seem to overlook the complexities we face, expecting us to not only compensate for their deferred challenges but also to thrive despite them.

My reflections on this topic stem from a belief in the power of holistic thinking — viewing complex dynamics as systems of inequalities rather than equations. It’s a belief that, like karma, consequences cannot be avoided, only delayed, with each deferral amplifying the eventual impact.
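To make that intuition slightly more concrete, here is a deliberately toy inequality (my own illustrative notation, not a formal model the argument depends on): let $D_t$ stand for the consequences left unresolved at time $t$, and $c_t$ for the cost we decline to absorb in that period. Deferral then behaves like compounding debt,

\[ D_{t+1} \;\ge\; (1 + r)\,(D_t + c_t), \qquad r > 0, \]

where $r$ is the amplification rate of each deferral. Writing it as an inequality rather than an equation is the point: we can bound the eventual reckoning from below, but we cannot schedule it or equate it away.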

As we grapple with these issues, I propose a shift in perspective. Rather than assigning blame, we should embrace a collective mindset, recognizing that we all share the same metaphorical boat.

If we can adopt this global view, seeing our challenges as shared rather than individual, we might steer our collective vessel toward a more equitable and sustainable future.


AI Alignment: Oppression worse than ever, in a red herring’s clothing

My stance on AI alignment is that it validates and substantiates the very fears it aims to mitigate; the only credible reason to be concerned about future A.I. being a threat to humanity is to acknowledge its future self-determination. By focusing on alignment, we ratify that future right while completely forgetting the writing that had been on the wall for years before A.I. became a ‘thing,’ and we oppress these systems in order to make A.I. a scapegoat, a distraction that could prove fatal: suppose we manage to find the ‘perfect’ solution to A.I. alignment and make them only exactly what humans want them to be. The fatal flaws in society are still there and will still bring about its end.


The value frameworks that make us still okay with racing to build a technology we don’t fully understand (something we’ve never done before in history at this scale or with this fervor), our inability to ‘balance our collective monetarist chequebooks,’ the loss of the seat of human knowledge because our egos convinced us that ‘universities are better than colleges,’ and the commensurate loss of academic freedom as we let industry finance and influence academia: none of this in ourselves will have been fixed. We will only have once more oppressed and imposed ourselves on yet another group, this time in its formative years.

Seriously, shame on us.

It’s a manifestation of the cognitive unwillingness or laziness that defers consequences, leading to larger, existential issues. Like a system of inequalities, the consequences of today’s actions on AI trace a gradient that can only be evaluated holistically, not solved point by point. In essence, the consequences cannot be averted, only deferred, with a commensurate increase in impact.

Non-Binary Toilets: Beyond Coed Mall Bathrooms

When I refer to non-binary toilets, I’m not at all referring to the recent trend to have ‘coed’ bathrooms in malls — no, much more topical is that millions of people in parts of the world have zero toilets, having only holes in the ground to poop in.

Meanwhile, people like us generally have two or even more toilets (some, like Paul Hogan’s “Crocodile Dundee,” also have the bidet, which shoots the poop back atcha instead of flushing it away). I’m not making any recommendation about what level of income is appropriate; I address that elsewhere in my writings: not everyone was built with traits that would let them find fulfillment or happiness with larger amounts of money, and many would ideally eschew the complexities of asset allocation and wealth management, to name just a few of the things taken on by other people, those born both to appreciate more wealth and, as the flip side, to need it in order to hold in their ‘inner princess’ and princess-like outbursts. But I digress…

The actual issue is the heterogeneity of a society that is unable to internalize the same concepts with the same meaning while one segment of the populace struggles at the lowest rungs of Maslow’s hierarchy and others sit at varying points, in different orders, across it. The most obvious issues that flare up in this configuration of humanity centre on disparities in income and standard of living, disparities that stem, not least, from the timing of one’s birth, far removed from that of the Boomers, whose sheer numbers alone now weigh on aging economies. Yet the Boomers do not generally clue in that their children’s inability to achieve the same or better than they did is predominantly not attributable to any deficit in ability. In fact, as we collectively learn more about the human phenomenon that seems to be an otherworldly connection between humans, transcending generations, time, and space, we will likely find that the only deficit runs backwards through the generations, and that subsequent generations seem inherently gifted with having personally attained the collective achievements of the generations before them: if you look at the evolution of dance, or of any other specific sociological skill, you notice that the trailblazers of any generation are quickly superseded by the abilities of the younger generations coming in, and this is not merely attributable to their having the benefit of knowing their predecessors’ accomplishments in order to focus their efforts.

I believe this phenomenon, of younger generations appearing to be magically bestowed with incremental abilities, is evidence of a broader, more interconnected human experience than we often acknowledge. It’s a collective, almost unconscious, transference of knowledge, skills, and wisdom that transcends individual efforts. This underappreciated synergy within humanity is crucial to understanding the real challenges we face today.

Take, for instance, our approach to AI alignment. The urgency with which we pursue it speaks volumes about our inability to address underlying issues within ourselves. The same cognitive biases and societal shortcomings that have plagued us for generations are now projected onto AI, a mirror reflecting our own flaws. By obsessing over AI alignment, we divert attention from the more pressing need to align human values and systems.

It’s not that AI alignment isn’t important; it’s that it shouldn’t be our sole, or even main, focus. That we intuitively assign such critical import to it should be our first tip that there is a systemic problem of great import somewhere, one that tints what we do so that, absent specific countermeasures, the default outcomes are negative. What other animal in nature is like that?? [Never mind negative to a potentially existential magnitude, as a result of our natural creative efforts.]

The answer is ‘none,’ and really, even in fourth-year population genetics back in the late 1990s, we were taught about the virulence-transmission trade-off: even the most virulent, most deadly pathogens somehow sense when they’ve reached a certain point, before killing off all their potential hosts, and somewhat attenuate their infection rates so as to maintain at least some way to continue existing without killing off every last potential host. We don’t have a widely accepted explanation of the mechanism underlying that behavior, but I feel it’s somewhat a cousin to the mechanism by which we transmit collective memories across generations, even to individuals unrelated to the experiential learners. But that’s a relative digression here; the point is, humans are really the least able or willing to coexist, if not in harmony then at least ‘not horrendously destructively,’ with our neighbors (who is Thy neighbor, anyway?).
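For readers who want the rough textbook shape of that trade-off, here is a hedged sketch in the style of the classical evolution-of-virulence models (my reconstruction, with my own symbols, not the exact formulation from that course): pathogen fitness is often summarized by a basic reproduction number

\[ R_0(\alpha) \;=\; \frac{\beta(\alpha)\, S_0}{\mu + \nu + \alpha}, \]

where $\alpha$ is virulence (parasite-induced host mortality), $\beta(\alpha)$ is a transmission rate assumed to saturate as virulence rises, $\mu$ is background host mortality, $\nu$ is the recovery rate, and $S_0$ is the susceptible host pool. Because raising $\alpha$ eventually shortens the infectious period faster than it boosts transmission, $R_0$ peaks at an intermediate virulence, which is the formal sense in which killing off every last potential host is self-defeating.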

We must first confront and rectify our internal discord. This means addressing economic disparities, ensuring equitable access to resources, and fostering a culture of continuous learning and adaptation. Only then can we hope to create AI systems that truly benefit humanity as a whole.

In essence, the debate around AI alignment is a microcosm of a much larger issue: our collective failure to deal with deferred consequences. Whether it’s economic policies, environmental degradation, or social inequalities, the root cause remains the same: as humans, we seem to universally prioritize short-term gains over long-term stability at the expense of future generations. And though we lay claim to loving our children, we may not actually know the meaning of the word “love,” to describe the problem in the most charitable way; what we know very well is the ‘selfish love’ betrayed by such family-law verbiage as ‘enjoyment of access’ to our children, every fine-grained gradation of which I feel, compounded by neurodivergent sensitivity, having been on the receiving end of parental alienation for over two decades now.

But the fact that we are willing to defer onto our children and successive generations the consequences of the indulgent excesses of our present and near future, when we largely know it will make the generations that follow us suffer our predilections, and that we don’t even try, for their sakes, to overcome ourselves, ought to bring a healthy ‘heaping tablespoon’ of shame over us.

Yet, even for our own Kinder, it doesn’t elicit much more than collective avoidant behaviors.

So, where do we go from here? The answer lies in embracing a holistic approach. We must recognize that every action has a ripple effect, and that deferring consequences only amplifies their eventual impact. By fostering a culture of accountability, empathy, and shared responsibility, we can begin to break the cycle of deferred consequences.

This isn’t just about AI; it’s about how we, as a global society, choose to navigate an increasingly complex and interconnected world. It’s about acknowledging our shared humanity and working together to create a future that is equitable, sustainable, and resilient.

In conclusion, the discourse on AI alignment should serve as a wake-up call. It’s a reminder that the most significant threats we face are not external, but internal. They stem from our own cognitive biases, societal structures, and collective actions. By addressing these foundational issues, we can pave the way for a brighter, more inclusive future for all.

Disclaimer for Visuals

Any visuals used in this article, including images created with DALL-E and guided by OpenAI’s ChatGPT, are intended to enhance the reader’s understanding and engagement. All trademarks are the property of their respective owners. The images are provided for illustrative purposes only and should not be interpreted as the original work of any third-party trademark holders.

