How Questioning My Manifestation Process May Lead to Enlightenment

Within my bubble of limited awareness, my manifestation process works like a computer input-output (I/O) system. As in a computer, the outcome of a process can provide feedback that modifies the input to the next cycle.

Seems like a fairly straightforward process – change the input, change the outcome, repeat. Each cycle of the manifestation process provides input to the next. Conservation of energy!

Each manifestation cycle starts with an intention that provides input to the process, the outcome of which illustrates and often amplifies that intent. The amplified illustration of intent offers feedback that is much easier to measure than the input intention that started the process. In simple terms:

Intention in -> amplification process -> amplified illustration of intention out => feedback => intention in to next cycle…
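To make the loop concrete, here is a minimal sketch of the cycle in code. The function names, gain value, and feedback rule are hypothetical choices for illustration only, not a claim about how the process actually works:

```python
# Toy sketch of the intention -> amplification -> feedback cycle.
# amplify() and interpret_feedback() are illustrative assumptions.

def amplify(intention: float, gain: float = 3.0) -> float:
    """The machinery magnifies whatever it is given."""
    return intention * gain

def interpret_feedback(outcome: float) -> float:
    """Reading the outcome seeds the intention for the next cycle."""
    return outcome / 10.0  # arbitrary dampening, just to keep the loop readable

intention = 0.5  # a quiet initial intention
for cycle in range(3):
    outcome = amplify(intention)              # amplified illustration of intention
    print(f"cycle {cycle}: intention={intention:.2f} -> outcome={outcome:.2f}")
    intention = interpret_feedback(outcome)   # the outcome feeds the next cycle
```

Each pass through the loop is one manifestation cycle: input, amplification, outcome, feedback, new input.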

Garbage In, Garbage Out (GIGO)

Much of the time, my quiet inner intentions are “hidden” behind much louder outcomes that grab my attention later. That means I become aware of my intentions later along my timeline than when I set them. Right now, for example, I’m dealing with the manifest illustration of an intention I may have set into process minutes, hours, or days ago – maybe longer.

And then there is the data issue:

Spend enough time in deep enough conversation with artificial-intelligence experts, and at some point, they will all offer up the same axiom: garbage in, garbage out. It’s possible to sidestep sampling bias and ensure that systems are being trained on a wealth of balanced data, but if that data comes weighted with our society’s prejudices and discriminations, the algorithm isn’t exactly better off. AI is evolving much more rapidly than the data it has to work with, so it’s destined not just to reflect and replicate biases but also to prolong and reinforce them. An algorithm, after all, is just a set of instructions.

Of course, there’s another solution, elegant in its simplicity and fundamentally fair: get better data. (Groen, 2018)

What happens when the algorithm trains on poor data? It simply processes and amplifies whatever data I feed it. When my intention is unclear (garbage in), as it might be when I’m emotionally charged, for example, the machinery of the program supplies my senses with feedback that looks unclear (garbage out) – amplified!
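As a toy illustration of that garbage-in, garbage-out idea, the sketch below assumes a simple amplifier (a hypothetical stand-in for the “machinery”): a clear signal comes out clear and louder, while a noisy one comes out just as unclear, only amplified:

```python
import random

# Hypothetical amplifier used only to illustrate GIGO: it magnifies whatever
# it receives, so noise in the input is preserved and made louder.

def amplify(signal: list[float], gain: float = 5.0) -> list[float]:
    return [sample * gain for sample in signal]

clear_intention = [1.0, 1.0, 1.0]                                       # clear signal in
noisy_intention = [1.0 + random.uniform(-0.8, 0.8) for _ in range(3)]   # garbage in

print("clear out:", amplify(clear_intention))   # clear, just louder
print("noisy out:", amplify(noisy_intention))   # still garbage, now amplified
```

Getting “better data,” in this framing, means clarifying the intention before it ever reaches the amplifier.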

Questioning Manifestation

To manage feedback, I apply awareness-inducing challenge questions. To the degree I’m aware of these questions, I become aware of my true intentions – and maybe myself in the process.

First, let’s take charge of the process by owning it: “My intention led to the outcome I perceive.” Then, let’s reverse the manifestation process to discover our hidden agenda:

  • “What” questions turn observation of outcome into feedback – “What do I feel, hear, and see?” (the manifestation/illustration of intention)
  • “How” questions turn feedback into expressions of intention – “How does this [observation] illustrate my intention?”
  • “Why” questions expose and define intentions – “Why does my intention feel, sound, and appear as I feel, hear, and see it?”
  • “Who” questions challenge the identity of the source of intentions – “Who am I who perceives this as I do?”

What might happen were I to extend this questioning process to EVERY manifestation? Even those that appear to me to be owned by someone else? After all, aren’t my “observations” of others actually MY perception of their process? Am I not the one doing the perceiving and interpreting in my world? When someone I care about is having a difficult time, is it not me who perceives them having a difficult time? I may agree with others that my friend is having difficulty, and yet, is it not me who perceives others in agreement?

Beyond Manifestation

At some point, I may apply the above questions to EVERYTHING I perceive. Until then, questioning that which appears as mine alone will provide a window into the hidden world of my ego and maybe light the passageway into full enlightenment.

Sources:

  • Groen, Danielle. “How We Made AI as Racist and Sexist as Humans.” The Walrus, May 16, 2018.
  • Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018.