Another Poll Shows Medicare For All Is A Winning Issue For Democrats


According to a new CBS News poll, 56% of people
here in the United States believe that providing healthcare is something the federal government
should do for the people of the United States. 56% say, hell yeah, there needs to be a government
run healthcare program. And what’s shocking is that nearly half of
these people who say the government should be running a healthcare program, nearly half
of them also say, even if it means getting rid of private insurance. Now, this same CBS News poll says that roughly
75% of the people they talked to said they actually do like their insurance. 75% like their insurance, which obviously
means 75% of people have never had to stay for a few days in a hospital or go in for
an emergency surgery, because that number seems a little bit high. But nonetheless, you can like it, but the important
thing is 56% say government has gotta get in the business of providing healthcare. Even if it means we lose what we’ve got, because
we’re going to gain something better. The reason this poll is significant, and it is the latest in a long line of polls showing that people want government-run healthcare,
is because last night at the Democratic debate, you had Democrats up on that stage, alleged
Democrats. You had Klobuchar, you had Buttigieg, you
had some other folks, Biden and Harris saying, no, people need to have choice. We can’t just have a government run program. Maybe make it part of Medicare for all who
want it. As we have repeatedly said, Medicare for all
who want it is basically being set up to fail. Just like the Affordable Care Act, Medicare
for all who want it can be dismantled, because if you only have a certain percentage of the population, something below 100, who say, yes, I want this plan, then later on the government can come in when Republicans control everything again (and eventually they will, just like eventually Democrats will), and they can say, we’re just going to get rid of it. They have that authority. Yeah. People might get pissed off, but they’ll also
forget about it. Cause as a whole, as a public, we’re kind
of stupid. That’s just the way we are. But if you enact it and you say, this is the
only game in town, this is what everybody gets, rich and poor alike. This is your healthcare. You don’t have to do anything. You just go to a doctor, you fill out your
name, you get diagnosed, and you leave. They do that. You can’t take that away. I mean legislatively speaking, you could,
but you couldn’t do it without massive public backlash. And that’s why on this particular issue, again,
and this is based on the polling, you go big, you go all the way. Because if you half-ass it, like they half-assed
the Affordable Care Act, it’s going to get destroyed. It’s going to get dismantled and it’s going
to end up being something that people aren’t even sure whether they like or not, or whether it even does anything for them. You got to go big. This is what the public wants. Medicare for all. Anything less is a losing
issue for the Democrats.

Nancy Pelosi spent months trying to convince the American people Democrats would not rush into impeachment


Nancy Pelosi spent months trying to convince the American people Democrats would not rush into impeaching President Trump, but one of Pelosi’s top lieutenants just spilled the beans on national TV about the Democrats’ true intentions, and Nancy Pelosi allowed for a surprising confession about when she will impeach Trump. Congressman Jerry Nadler chairs the Judiciary Committee, which affords him nominal power as to when Democrats can bring an impeachment inquiry and draft articles of impeachment. However, the reality of the situation is nothing starts without Nancy Pelosi’s explicit approval. Nadler appeared on MSNBC to
give the network’s liberal viewers an update on impeachment, and Nadler stunned the audience into silence by revealing the Democrats could introduce articles of impeachment as soon as October. “If we decide to report articles of impeachment, we could get to that late in the fall, in the latter part of the year,” Nadler declared. Nadler expanded on this revelation by explaining the Democrats
believe the court cases relating to access to the secret grand jury material from the Mueller report, and the Trump administration’s claim of executive privilege blocking former White House Counsel Don McGahn from testifying, should be wrapped up by the fall. “I think that we will probably get court decisions by the end of October, maybe shortly thereafter. We’ll have hearings in September and October with people, witnesses who are not dependent on the court proceedings,” Nadler continued. Democrats believe the grand jury
material, which by law remains secret, as well as McGahn’s testimony, could contain the smoking guns the party needs to supercharge public sentiment for impeachment. Nancy Pelosi slow-walking impeachment was for public consumption. The speaker knows impeachment polls poorly, but Pelosi also knows the party base elected Democrats to the majority to impeach the president and would view a failure to fulfill that promise as a major betrayal. Pelosi is walking the tightrope, and that is why Nadler went on MSNBC, where Democrats can speak directly to their party’s voters and assure them impeachment was right around the corner. History says if the Democrats move to introduce articles of impeachment in the late fall, then the process will wrap up around the new year
with a Senate trial. In 1998, the Republican Congress initiated an impeachment inquiry in early October, the House passed articles of impeachment in December, and a Senate trial concluded in February 1999. If the Democrats follow this timeline, it will mean senators running for president, Bernie Sanders, Amy Klobuchar, Kirsten Gillibrand, Cory Booker, Kamala Harris and Michael Bennet, will be voting on removing the president from office in the run-up to the February 3 Iowa caucus. That will guarantee impeachment dominates the presidential election, and Donald Trump and his supporters see that as a political boon to the president’s reelection prospects.

Can Facebook And Google Detect And Stop Deepfakes?


Deepfakes have started
to appear everywhere. From viral celebrity face swaps to
impersonations of political leaders, it can be hard to spot the
difference now between real and fake. We’re entering an era in which our enemies
can make it look like anyone is saying anything at any point in time. And the digital impressions are
starting to have real financial repercussions. In the U.S., an audio deepfake of a CEO reportedly
scammed one company out of 10 million dollars. In the UK, an energy
firm was tricked into a fraudulent transfer of 220,000 euros. And with the 2020 election not far
off, there is huge potential for weaponized deepfakes on social media. Now tech giants like Google, Twitter,
Facebook and Microsoft are stepping up. With Facebook spending more than
10 million dollars to fight deepfakes, what’s at stake for businesses
and what’s being done to detect and regulate them? One of the
most well known deepfake creators is Dr. Fakenstein, or Jeff White. He walked us through the process. So let’s say we’ve got
source footage of Trump. I am concerned I wouldn’t want
to see a violent crackdown. And I have my destination
footage: The Little Rascals. This column here is Trump’s
face over the Little Rascals. Essentially a deepfake is an A.I. generated video or audio clip of a
real person doing and saying fictional things. A computer uses a deep
neural network, hence the term deepfake, to learn the movements or sounds
of two different recordings and combine them in a realistic way. And there is an emerging
market for creators of deepfakes. I’ve had hundreds of work requests since
I put out some of my videos. Most people just want to have a
laugh and be entertained by it. White creates videos for
Jimmy Kimmel Live. Ladies and gentlemen, all rise for
Brooklyn Heights. And next month, he’s quitting his day job at a dairy
farm to create deepfakes full time. A handful of well known creators like
him use it solely for satire. It can be used for
more publicly available fun, too. Like with Berkeley’s Everybody
Dance Now project. The technology is nothing new. It’s part of how major Hollywood
studios include actors in critical roles after they’ve died. Lord Vader
will handle the fleet. It helps gaming companies let
players control their favorite athletes. What is new: the process
has become cheaper and easier. Towards the beginning of 2018, a couple of important projects, one of them known as FakeApp, the other known as Faceswap, became readily available. And in the last year, year
and a half, the pace of innovation around the creation of deepfakes
has accelerated quite a bit. And you can just go online and
you can find tools, download them. All I have to do to create a deepfake
at this point in time is to click 10 times. Whoa! It’s so easy that Symantec made a
deepfake of me without much notice before our interview. The amount of material
that you need to create these things is 10 to 20 seconds of
video and maybe a minute of audio. This wouldn’t even be happening. This is something that takes about, you
know, not a lot of source material. But just imagine if you had
hours and hours and hours of video at hand, we could create a really
good likeness of you from different angles, saying different things. So what’s at stake for companies or
individuals when seeing or hearing is no longer believing? The fact that
people can see something like this, believe that it’s true and collectively the
markets can react to it are a huge concern for people. The newest attack that we are seeing,
which people had not anticipated, is in some sense the use of
audio deepfakes to cause financial scams. I would like to transfer
five thousand dollars to Adam. In the last few months, security
firm Symantec says it’s seen three attempted scams at companies involving
audio deepfakes of top executives. In one case, the company lost 10
million dollars after a phone call where a deepfake audio of the CEO
requested a transfer of the money. The perpetrators still
haven’t been caught. Here’s a station you might like. It’s no different than what you would
see for like what powers like Alexa or some other products like that. And the CEO will answer whatever question
you have because they can be created on the fly. But deepfakes
also offer profit opportunities to some companies. You’d have an actor that
would license their likeness and then at a very low cost, the studio
could produce all kinds of marketing materials with their likeness without having
to go through the same level of production that they
do today, you know. And so I can imagine lots and lots
of audio being produced using the voice of an actor, the voice of some
other VIP, and all of that being monetized. Just like we practiced. Ready? Now Amazon is doing just that. Today in Los Angeles,
it’s 85 degrees. Say my name. Woohoo. It announced in
September 2019 that Alexa devices can speak with the voice of
celebrities like Samuel L. Jackson. On Instagram, A.I. generated influencers like lilmiquela are
backed by Silicon Valley money. The deepfake videos of these virtual
influencers bring in millions of followers, which means a lot of
potential revenue without having to pay talent to perform. And in China,
a government backed media outlet introduced a virtual news anchor who
can work 24 hours a day. I will work tirelessly to keep you informed
as texts will be typed into my system uninterrupted. But the potential for misuse is high. So one of the most insidious uses of
deepfakes is in what we call revenge porn or pornography that somebody puts out
to get back at somebody who they believe wronged them. This
also happens with celebrities. But certainly things like this that
would ruin the reputation of a celebrity or somebody else in the public eye
are going to be top of mind for these social media companies. Also top of mind for social
media companies: the 2020 elections. Researchers expect that in the 2020
election, deepfakes will probably be deployed. Will they be deployed by
a foreign nation looking to cause instability? That’s possible. And that could be significant. In that case, you would have
a candidate saying something totally outrageous. I own one pair of underwear. Or something that inflames the markets
or something that puts their chances of being elected in question. There is also a concern about faking
words from leaders of countries, from leaders of organizations like the IMF
that would have a significant consequence, even if it was short term,
on markets and even on global stability in terms of conflict. That he was engaged in a cover up. In May, House Speaker Nancy
Pelosi accused Facebook of allowing disinformation to spread when the company
refused to take down a manipulated video of her. In response, Facebook updated its
content review policies, doubling down on its refusal to remove deepfakes. Two British artists tested Facebook’s resolve
by posting a deepfake of CEO Mark Zuckerberg on Instagram. Whoever controls the data
controls the future. Facebook held its ground, refusing to
remove it along with other deepfakes like those featuring Kim
Kardashian and President Trump. I pulled off the biggest heist of
the century and people just have no idea. Now there’s a question,
though, about whether misinformation, whether these deepfakes are actually
just a completely different category of thing from normal kind
of false statements overall. And I think that there’s a
very good case that they are. Now Facebook is trying to get ahead
of deepfakes before they make it on its platforms. It’s spending more than
10 million dollars and partnering with Microsoft to launch a Deepfake Detection
Challenge at the end of the year. Facebook itself will create deepfakes with
paid actors to be used in the challenge. Then pre-screened participants
will compete for financial prizes to create new open source
tools for detecting which videos are fake. Twitter told CNBC it challenges eight to
10 million accounts per week for policy violations, which includes the use
of Twitter to mislead others. As a uniquely open service,
Twitter enables the clarification of falsehoods in real time. We proactively enforce policies and use
technology to halt the spread of content propagated through
manipulative tactics. And it recently acquired a
London-based startup called Fabula A.I., which has a patented A.I. system it calls geometric deep learning
that uses algorithms to detect the spread of misinformation online. At YouTube, which is owned
by Google, community guidelines prohibit deceptive practices and videos are
regularly removed for violating these guidelines. Google launched a program last
year to advance detection of fake audio specifically, including its
own automatic speaker verification spoof challenge, inviting researchers
to submit countermeasures against fake speech. One small cybersecurity company
has already launched an open source tool that’s helping create
algorithms to detect deepfakes. The way our platform works is we’re
pulling in billions of pieces of content on a monthly basis. Text, images, video, all
kinds of stuff. And so in this case, as the
video flows through our platform, we’ll now route it through deepfake detection
that says like, deepfake, not deepfake, and if it is
a deepfake, alert our customers. Baltimore-based ZeroFOX intends to be the
first to have customers pay to be alerted of deepfakes. Meanwhile, academic institutions and the
government are working on other solutions. Another approach is to put
a registry out there where people can register their authentic content and
then other people can check with the registry to see if that content
is in fact, authentic or not. The Pentagon’s research enterprise,
called Defense Advanced Research Projects Agency, or DARPA, is fighting
deepfakes by first creating its own and then developing technology
that can spot it. The people who are defending us
against deepfakes are using A.I. just as much as the people who
are creating them are using A.I. It’s just that those who are creating
deepfakes seem to have a running start on this. I don’t think that
there’s a silver bullet yet developed. The technology is really only
one or two years old. And so we’re at an early stage. And at least until Facebook
announced monetary prizes, the business potential on the detection
side was small. There’s not really a market segment
for deepfake protection that’s mature yet. The tech is new. The threat landscape is just
beginning to emerge or whatever. So we’re the first or amongst the
first in terms of companies to develop and ship the technology around this. One of the best things about the Facebook
challenge is that it brings in a lot of people who probably weren’t
interested in this technology to try and work on it, and I think what
we really need in the space with deepfakes is finding something that is
novel that we haven’t thought of before that works
for detecting these. Even if you can detect deepfakes,
there is currently little legal recourse that can be taken to stop them. For the most part, they are legal
to research and create at this point. Can you do an impression of
him? It’s going to be great. For the most part, if you’re a
public person, you really don’t own the rights to your public appearances or
even videos taken without your consent in public. I genuinely love
the process of manipulating people online for money. There are some things
that are unclear about the law, but for the most part, this also
applies to just regular people who post videos of themselves on
Facebook and YouTube. But if your image is used in
adult content, it’s likely illegal in states like California, where revenge porn is punishable
by up to six months in jail and a $1,000 fine. So a lot of porn websites, for
example, have declared that they are not going to allow hosting of deepfake
or uploading of deepfake-based porn. You can call me Sam. In China, a
deepfake app that allowed users to graft their faces onto popular
movie clips went viral. But it was shut down earlier this
month over privacy concerns because the app maintained the rights
to users’ images. In June, New York Congresswoman
Yvette Clarke introduced the DEEPFAKES Accountability Act in the House. It would require creators to disclose
when a video was altered or generated, and allow victims to sue. There is no way that a law
like this can be enforced against somebody who is sitting in a country
in Eastern Europe or anywhere else across the globe that has already proven
to be hostile to the U.S. when it comes to enforcing
our laws around cybersecurity. I don’t like people passing
off videos as real. My stuff’s clearly satire. I don’t think anyone is
mistaking my stuff for real. Satirical deepfake creators like Dr. Fakenstein take ownership of their work
and are even monetizing it. But that’s different for
those with malicious intent. If you’re intent on publishing a deepfake
and not having it traced back to you, there are plenty of ways
that you can remain anonymous. For better or worse, deepfakes
are only getting more refined. The challenge will be whether the
technology to detect and prevent them can keep up. I think that the
people who are creating deepfakes for nefarious reasons are way
ahead of us. I think that they have access to A.I. that is more advanced than what we
have working on the solution side and certainly access to more resources than we
have so far given people to fight against the problem. Hopefully that will change with
what’s taking place now. May sound basic, but how we move
forward in the age of information is going to be the difference between whether
we survive or whether we become some kind of dystopia. God bless you, your families, our
children, and God bless the United States of America.