B I A S


“The betas be BIG TIME HATIN’…” © — DTO™

The following subject matter is, itself, subject to the viewpoints of its readers. However, given the history between the [predominantly white] LGBTQ “movement” and the continuing plight of ADOS (American Descendants of Slavery), the relationship has always been in the nature of a powder keg stretched beyond its plastic-explosive limit, ready to release the energetic angst that has been brimming for decades. In behavior repellent to their very own signature approach to social paradigms, the white LGBTQ adherents have decided to take it a step further by retreating to a position most familiar to their least favorite interest: “coding”.

According to the study, the use of the “n-word” online used by African-Americans was flagged even though its use is culturally more acceptable and a term often used in AAVE as a non-hate speech by other African-Americans. However, there are instances where the “n-word” is used in hateful terms and the algorithm is currently unable to tell the difference at this time.

Nicole Martin, author of the Forbes article, “Google’s Artificial Intelligence Hate Speech Detector Is ‘Racially Biased’, Study Finds”
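
To make that failure mode concrete, here is a minimal sketch of my own (an illustration of the problem, not Google’s actual model): a keyword-driven flagger matches tokens with zero awareness of who is speaking or why, so in-group AAVE usage and genuine hate speech get the exact same treatment.

```python
# A context-blind, keyword-driven "toxicity" flagger. This is my own
# illustration of the failure mode, not Google's actual model; "<slur>"
# is a placeholder token standing in for the real word.
FLAGGED_TOKENS = {"<slur>"}

def naive_toxicity_flag(text: str) -> bool:
    """Flag text if any token is on the blocklist; speaker and intent are invisible."""
    return any(tok.strip(".,!?") in FLAGGED_TOKENS for tok in text.lower().split())

# In-group AAVE usage and genuinely hateful usage contain the same token,
# so the context-blind rule flags both identically:
print(naive_toxicity_flag("my <slur>, we made it!"))          # True (false positive)
print(naive_toxicity_flag("i hate every <slur> out there"))   # True (intended catch)
```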

The machine-learning-engineered surrealism in which the human experience is now anchored, by way of the unethical “coding” practice at the hands of degreed-and-dissociated software “engineers” who masterfully whip up an algorithm (“a process or set of rules to be followed…”), is, in itself, a programmable variation of dissociation. The dissociation is clarified in one half of the prior definition: “…to be followed”. The issue of bias in artificial intelligence, machine learning, natural language processing, or algorithmic output is that it is purely programmatic: “code this to get that”. Despite the misapplication of “the algorithm”, there are no language police in the world, nor any “coders” with the ability to govern the syntactic sugar that comprises the algorithmic commands given in any given language.

“Bad data can contain implicit racial, gender, or ideological biases. Many AI systems will continue to be trained using bad data, making this an ongoing problem. But we believe that bias can be tamed and that the AI systems that will tackle bias will be the most successful,” said the report.

Source: Forbes article
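
“Bad data” is not an abstraction; it compounds mechanically. A toy sketch with fabricated numbers, purely for illustration: train even the dumbest possible classifier on labels that were skewed by the annotators, and the skew comes out the other side as “the algorithm”.

```python
# A minimal sketch of "bad data in, bias out": a toy classifier trained on
# biased labels reproduces that bias exactly. The rows below are fabricated
# for illustration only.
from collections import Counter

# Hypothetical training rows: (dialect_marker, human_label). Suppose the
# annotators over-labeled AAVE posts as "toxic" -- the bias lives in the data.
training = [("aave", "toxic")] * 80 + [("aave", "ok")] * 20 \
         + [("sae", "toxic")] * 20 + [("sae", "ok")] * 80

def train_majority(rows):
    """Learn the majority label per feature value -- the 'rules to be followed'."""
    buckets = {}
    for feat, label in rows:
        buckets.setdefault(feat, Counter())[label] += 1
    return {feat: counts.most_common(1)[0][0] for feat, counts in buckets.items()}

model = train_majority(training)
print(model)  # {'aave': 'toxic', 'sae': 'ok'} -- the skew is now "the algorithm"
```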


Maarten Sap, PhD. In reference to what I stated in the opening paragraph, there are representatives of the predominantly white LGBTQ community who have no business at the wheel of artificial intelligence or its implementation. Maarten is one of them (a proud member of the queer community), and I deem him a prime offender in the case of algorithmic bias against Blacks (ADOS). White LGBTQ members are charged with the impetus to stack obstacles against ADOS / Foundational Black Americans. Perhaps this reads as a pre-Millennial stance, an imaginary understatement, but the strategies of [those against ADOS] are themed as “intersectional”. There are so many avenues of developmental approach in the game of AI, machine learning / deep learning, natural language processing, and so forth, that it shouldn’t be much of a shock that “coders” would implement digital time bombs set to go off synchronously across the tech globe. Weaponizing AI against ADOS is just the start, and the components of the faulty system are now being motivated even more. When you allow undiscerning personalities access to keyboards, programming languages, and IDEs, they will input wild-eyed social standards that breach the openness of what should be deemed acceptable. They have no intention of conveying human-grade modesty. See, “coding” has particular principles geared toward subverting with the prowess of subtlety. To be adversarial is the nature of their tuned programming.

…ar-tih-fich-shäl

Image Credit: AP Photo / Eric Risberg


At an initial glance, it shouldn’t be all that difficult to tell that the algorithm (or algorithms) would have to be programmed to host characterizations that are humane first, not fabricated from an anthropomorphic viewpoint. Perspectives require human input; computers do not possess the innate agility to “think”; they merely approximate and incorrectly apply what has been interpreted as output. In other words, there is no way on Earth that an AI is going to conjure up bias [against ADOS / Foundational Black Americans] of its own accord. As far as the “training data” goes, we have to look at the source of that “training data”. What people forget is that artificial intelligence is still largely a research topic. Hubris aside, not many have taken up the charge to conduct proper research on human intelligence, so how could something as intangible as intelligence be replicated as a “learning” aspect for computers and/or computerized machinery? Sally Public is none the wiser. The only way to have AI accepted [by everyone] except the most scrutinizing individuals (myself especially) is to have the code learn on its own, with guidelines intact. Yet the inclusion of guidelines prevents the AI from truly learning; therefore, it is never truly intelligent.

There’s more to it than a computer “studying” mental faculties through the use of computational models. Ask yourself this question: why do you think human-level intelligence and intuition are capable of performing a proper causality analysis? Show me where a consensus built on human-level intelligence has been achieved as of yet. The truth is, humanity has yet to acquire the means to perform a causality analysis perfectly, because causality itself hasn’t been defined in perfect fashion. Even deductive reasoning doesn’t narrow things down in the same manner that humans do. This is why algorithms cannot narrow down latent variables the way humans do: those latent variables might never have been considered in the first place. Hence, those latent variables don’t show up in the “training data”. With the “supervision” of a human analyst, the algorithm would most likely be able to associate [object 1] with [object 2] and, therefore, collect data from those “matching” associations.
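
A small synthetic demonstration of that latent-variable point (invented numbers, no real data): when the confounder is never logged, the correlation it produces sits in the “training data” looking causal, and nothing in the algorithm will ever ask the question a human analyst would.

```python
# When a confounder is never recorded, the "training data" shows a
# correlation the algorithm simply absorbs. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
latent = rng.normal(size=n)                  # the confounder -- never logged
obs_a = latent + rng.normal(size=n) * 0.5    # both observed variables are
obs_b = latent + rng.normal(size=n) * 0.5    # driven by the hidden cause

# The dataset handed to the model contains only obs_a and obs_b:
corr = np.corrcoef(obs_a, obs_b)[0, 1]
print(f"correlation in the 'training data': {corr:.2f}")  # ~0.80, looks causal

# A supervising human analyst can ask "what ELSE could explain this?";
# the algorithm only matches [object 1] with [object 2] and moves on.
```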

flawed


So, what does this all say about images, with regard to “facial recognition”? An AI program “discovers” a basic law of physics by sifting through dynamic motion data, knowing nothing of the underlying processes and “armed” with only basic arithmetic and/or algebraic operations. Eventually, the program will produce mathematical relationships that describe the behavior of the data (think Hamiltonians). The issue at hand, the bias directed against, largely, Black people, arises when an AI program is applied against complex data sets and produces some sort of manufactured “law” that is then utilized to describe the behavior of the data (the output/outcome). Doubling down on the issue: how do the so-called “scientists” begin to understand the processes that give rise to this “law”? The outcome is the product of that very “law”. If the output from facial recognition states that the behavior [derived as action from the data] is described as “white men are to be labeled as terrorists”, then that is the observable outcome in accordance with the “law”. Do you believe for a second that the current social climate, predominantly governed by white men, would allow a “law” of that nature to be industrialized? The greatest issue with AI today is how humans use, or intend to use, it.
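
For the curious, here is a minimal sketch of that “law discovery” routine, using a simulated oscillator instead of real motion data. Every name and candidate term is my own illustrative assumption, not any specific research system: the program knows no physics, only a handful of algebraic combinations, and it crowns whichever one stays constant.

```python
# Sketch of "law discovery": given only position/velocity data from a
# simulated oscillator, search a small space of algebraic expressions for a
# combination that stays constant over time -- a Hamiltonian-like invariant.
import itertools
import numpy as np

# Simulate x(t), v(t) for a unit-mass, unit-frequency oscillator.
t = np.linspace(0, 20, 2000)
x = np.cos(t)
v = -np.sin(t)

# The "basic arithmetic/algebraic operations" the search is armed with.
terms = {"x": x, "v": v, "x^2": x**2, "v^2": v**2, "x*v": x * v}

# Score each pair of terms by how constant their sum is across the data;
# a near-zero relative spread means the program has found a "law".
best = None
for (na, a), (nb, b) in itertools.combinations(terms.items(), 2):
    q = a + b
    spread = np.std(q) / (abs(np.mean(q)) + 1e-12)
    if best is None or spread < best[0]:
        best = (spread, f"{na} + {nb}")

print(f"most conserved combination: {best[1]} (relative spread {best[0]:.2e})")
# Expect: x^2 + v^2, i.e. twice the oscillator's energy -- a relationship
# that describes the data's behavior without the program knowing any physics.
```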

…confusion on the right…

…frustration on the left…

Image Credit: REUTERS / Aly Song


Some people need to play catch-up regarding the latest criteria for what constitutes artificial intelligence. The most recent exemplification of this is the “disagreeable” spirits of Alibaba’s Jack Ma and the “Cheech” to Joe Rogan’s “Chong”, Elon Musk. I’m not going to speak much on this, except to say that Jack Ma seems to have a better communicable comprehension of what AI can do, as opposed to Musk’s infantile pandering, which seems highly impressionable and extremely influenced by a Saturday afternoon of perusing comic books and penning johnny-come-lately fan mail to the estate of the late Stan Lee. However, and I guess in “defense” of Musk, I would equate his perspective to the military’s definition of intelligence: a 1950s-era Nike-style radar (most likely developed by the U.S. Army, styled after 1940s spread-spectrum technology) that extends human vision beyond many kilometers [including bore-sighted visible-light tracking]. This is deep in the microwave band, permitting visibility under conditions that would trouble a human. In other words, this is a situation in which the computer wins against the human in an actual real-world application. Then again, computers have been doing this [and continue to do so] sans “artificial” intelligence.
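
For scale, a back-of-the-envelope computation with the classic radar range equation. Every parameter below is an assumed, illustrative value, not the spec of any actual Nike-era system:

```python
# Why a microwave radar "sees" farther than a human: the classic radar
# range equation. All parameter values are assumed for illustration.
import math

P_t = 1.0e6         # peak transmit power, watts (assumed)
G = 10 ** (35 / 10) # antenna gain, 35 dB (assumed)
freq = 3.0e9        # 3 GHz, S-band -- "deep in the microwave band"
wavelength = 3.0e8 / freq  # c / f, meters
sigma = 10.0        # target radar cross-section, m^2 (assumed, aircraft-ish)
P_min = 1.0e-13     # minimum detectable echo power, watts (assumed)

# R_max = [ P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * P_min) ]^(1/4)
R_max = (P_t * G**2 * wavelength**2 * sigma / ((4 * math.pi)**3 * P_min)) ** 0.25
print(f"max detection range ~ {R_max / 1000:.0f} km")  # on the order of hundreds of km
```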

The adversarial approach of militaristic interests will only amplify the biases, since “biases” serve the detrimental purpose of emphasizing unwarranted advances against those in opposition to weaponized “tech”. A militaristic interest will be hard-pressed to contain the capabilities of such systems. One utility of particular importance that will surface is producing strategies for conflict resolution, since the artificial intelligence harbors a need to understand human behavior (i.e., strengths and weaknesses).

In the end, computers are able to do some tasks better than humans. So is paper.


-Desmond (DTO™)
