Dem Zoe Lofgren Sounds Like an Idiot Trying to Prove Google Isn’t Biased


Democrat congresswoman Zoe Lofgren claimed she wanted to address Google’s so-called manipulation of search results during Tuesday’s hearing before Congress with Google CEO Sundar Pichai.

Her main goal appeared to be to call the President an idiot.

“Right now— if you Google the word ‘idiot’ under images, a picture of Donald Trump comes up. I just did that. How would that happen? How does search work, so that would occur?”

She’s not wrong, but her ulterior motives are glaringly obvious.

Sundar gave an explanation of sorts, one she appeared to have no interest in at all. He claimed the results are based on the popularity of the idea and the “freshness” of the searches. That’s cute.

Lofgren followed up by mocking Republican lawmakers for saying Google is biased, even though she had just demonstrated it might well be. Then, to prove her point, she said President Trump only got 20 percent of the vote at Google headquarters. It actually proves the opposite. Zoe the dummy is looking forward to working with the brainiac; that’s a joke, we hope.

It’s obvious Dems are fine with the bias at Google as long as it benefits them.

The media, including C-SPAN, are only showing the part of the clip they like.

This is the full clip:


3 Comments
Gato · 5 years ago

Funny, Donald Trump doesn’t show up on DuckDuckGo, which is why I avoid using Google as much as I can.

Joe Erdman · 5 years ago

Don’t google it, Bing it. It’s the lesser of two evils.

Greg · 5 years ago

It’s unfortunate that all those questioning Pichai have absolutely no understanding of technology, and yet ask questions anyway. With all the staff these Members have, do they not have anyone who can formulate coherent questions?

The technology of algorithms has been around since the early days of computers. One form is neural network programming, which is a type of “machine learning,” or AI. There are three forms of learning in the process, each of which basically “trains” an algorithm (a rough sketch of each follows the list):

Supervised Learning: Inputs are matched with known outputs. Given questions and answers as input/output pairs, the algorithm learns by example. It can be tested by giving it a new set of questions and having it predict, based on that learning, what the answers will be. Comparing the predicted and expected answers measures the results.

Unsupervised Learning: Here only input data is passed to the algorithm. The network tries to discover the structure that underlies the data. It can be used for clustering and association problems; the result of clustering is a set of groupings based on shared characteristics.

Reinforcement Learning: This sits somewhere between supervised and unsupervised learning. A restricted amount of output information is provided about the input data; the information takes the form of a signal that the predicted output is “good” or “bad,” without the algorithm being told what improvement is needed.
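
To make supervised learning concrete, here is a minimal sketch in Python. The library (scikit-learn), dataset, and model are illustrative assumptions only, not anything Google has described:

# Supervised learning sketch: learn from matched input/output pairs,
# then test on unseen questions and compare predictions to expected answers.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)  # inputs matched with known outputs
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(max_iter=2000, random_state=0)  # a small neural network
model.fit(X_train, y_train)  # "train" by example

# Measure results: compare predicted vs. expected outputs on new data.
print("accuracy:", model.score(X_test, y_test))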
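
An equally small unsupervised sketch, again on assumed toy data; the algorithm sees only inputs and must find the groupings itself:

# Unsupervised learning sketch: only inputs are given; the algorithm
# discovers the structure (here, clusters) underlying the data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # true labels discarded

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)  # groupings found from the inputs alone

print("cluster sizes:", [list(labels).count(c) for c in range(3)])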
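
And a reinforcement-learning sketch (tabular Q-learning on a made-up toy corridor, purely for illustration): the agent is only told via a reward whether an outcome was good, never what the correct action was:

# Reinforcement learning sketch: a reward signals "good" or "bad" after
# the fact; the agent is never told which action it should have taken.
import random

n_states, actions = 5, [-1, +1]  # a 5-cell corridor; move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s < n_states - 1:
        if random.random() < epsilon:
            a = random.choice(actions)  # explore
        else:
            a = max(actions, key=lambda act: Q[(s, act)])  # exploit
        s2 = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s2 == n_states - 1 else 0.0  # "good" only at the goal
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# The learned policy: the best action from each non-goal cell.
print({s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)})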

Some have correctly questioned what parameters these algorithms use. When it comes to social interactions, those parameters may include what the programmer sees as a Utopian outcome. It is one thing to incorporate AI into computer algorithms to excel at chess, for example, but applying algorithms to engineer a social construct is living in a fantasy. It is why social media demonstrably fails to accomplish its goals.

The human experience cannot be defined in machine terms. Philosophers have long asked the insurmountable question: where do our thoughts “come from”? It is a question that cannot be quantified. Therefore, attempting to use a computer algorithm to essentially “mold” the character of a people is a frivolous and dangerous endeavor. While removing “hate speech” from all platforms may sound like a worthy goal, it is a path that can only result in anarchy. People will become increasingly attuned to the concept, will inevitably see more of it, and more and more of it will necessarily need to be incorporated into the system.

Is it really a responsible goal for tech companies to want their platforms to be a “pleasant” experience any time one enters that realm? Clearly this is an ideology of a Utopian existence. It isn’t even possible in our own personal lives. Has a spouse ever had an argument where the other said something deemed hateful? Has anyone ever had a relationship with another person that was total bliss? Yet it would seem this is the society they seek to envision and accomplish. So long as we have thoughts we cannot expunge, including adverse ones, there can never be utopia. Dealing with individuals by “expulsion” will create a repressed sense of self that can be more destructive to society as a whole. In any event, it is literally impossible to create the pleasant environment that the tech companies so badly desire.