SN 990: Is Telegram an Encrypted App? - CrowdStrike Exodus, DDoS-as-a-Service, 'Active Listening' Ad Tech?

TWiT

Security Now (Audio)


It's time for Security Now. Steve Gibson is here.

And of course, as always, there's lots to talk about.

How many customers did CrowdStrike lose, if any?

And why Steve says, I'd still be a customer.

We'll also talk about Telegram's founder, owner, and CEO arrested in France

with what many are seeing as an attack on encrypted communications.

But Steve's going to do a deep dive on Telegram's,

and I'm going to put this in air quotes, encryption.

He says, no, it's not really.

All that and more coming up next on Security Now.

Podcasts you love.

From people you trust.

This is Twit.

This is Security Now with Steve Gibson.

Episode 990, recorded Tuesday, September 3rd, 2024.

Is Telegram an encrypted app?

It's time for Security Now, the show where we talk about your security,

your privacy, and what's going on in the world around us

with this cat right here, Mr. Stephen Tiberius Gibson,

the host at GRC.com.

Hi, Steve.

Yo, Leo.

How are you?

I'm looking at these episode numbers, and it's getting pretty exciting.

We're inching up.

And thank goodness I let everyone know a long time ago

that it wouldn't be a problem, that it wouldn't be over.

Otherwise, it'd be really sad.

Oh, yeah.

See, aren't you glad now?

9-9-0-9-9-1.

The countdown.

Oh, not good.

It'd be so sad.

Yeah.

So now it's a countdown to nothing.

Yeah.

Well, no.

Actually, I was telling the truth a long time ago when I said,

oh, I'm going to have to rejigger my technology to handle four digits

because, you know, back when I wrote it, we had one.

Or then maybe two.

But I thought, oh.

We're never going to need three.

But what the heck.

I'll just set it up for three.

We could do hex.

No, no, no.

We're going to go 9-9-9.

That'll be a celebration.

That's sometime in November.

And then we're going to go right on into the four digits.

However, I'm really sure we won't need five.

That's not going to happen.

I think you're safe on that.

I think we're safe.

Both of us will be just a memory.

So lots of interesting stuff to talk about.

As I know, you've been talking about it, but we haven't talked about it.

You touched on it here.

And it had just happened for last week's podcast.

And I didn't know what was going to happen.

But Telegram's founder, owner, and CEO has been arrested in France.

So we're going to look at what that means.

Also, one year after Microsoft began offering free cloud security event logging,

how's that going?

Also, to no one's surprise, CrowdStrike is losing customers.

But how many?

Microsoft, on that topic, is going to meet with CrowdStrike

and other vendors to discuss new solutions.

We'll talk about that.

Also, Yelp is not happy with Google.

You know, did or does Google put their thumb on the scale?

Yelp thinks so.

Where do you go to purchase yourself some DDoS when that's what you want?

Like a cup of DDoS?

Yeah.

How about sending a Telegram?

And Chrome exploits are becoming more rare and difficult to find.

So Google has upped the ante.

And, Leo, believe it or not, the Cox Media Group is still promoting their incredibly, I mean, just astonishingly, privacy-invading so-called active listening capability.

We're going to revisit that.

Also.

How about secretly having foreign agents doing all of your work for you?

What could possibly go wrong with that?

And the reason this podcast is titled,

Is Telegram an Encrypted App?

Because that was the title given to the recent posting by our favorite Johns Hopkins cryptographer, Matthew Green,

who has become increasingly annoyed by Telegram's claims of being an encrypted messaging platform.

So he finally asks the question, is Telegram an encrypted app?

We're going to look at that and answer the question.

That was a great blog post, actually.

I really enjoyed it.

A little surprising.

Yes.

Yeah.

He did a great job.

Yeah.

And we've been talking, as you mentioned, we've been talking a lot about Pavel Durov's arrest since it happened a week ago.

And...

Yep.

Yeah, it's quite a story, but we will get to that in just a little bit.

But first, how about a word from our sponsor?

Would you, would you mind?

All right.

Oh, no, please.

A little talk for you from our sponsor, BigID.

Yeah, I hope you know the name, BigID.

They are the leading DSPM solution.

That's Data Security Posture Management.

And BigID is where Data Security Posture Management is done differently.

BigID seamlessly integrates with your existing technology.

It's a multi-stack, it allows you to coordinate security and remediation workflows.

You can uncover dark data, identify and manage risk, remediate the way you want, and scale your data security strategy.

Take action on data risks, annotate, delete, quarantine, and more based on the data, all while maintaining an audit trail, which is great for compliance.

With BigID's advanced AI models, you can reduce risk, you can accelerate time to insight, and you can gain visibility and control over data.

You know, BigID has some big customers who have even bigger data.

BigID equipped the U.S. Army.

Imagine, who has more data in more disparate places than the United States Army?

BigID helped them illuminate dark data, accelerate cloud migration, minimize redundancy, and automate data retention.

Pretty big job, but they were able to do it.

The U.S. Army Training and Doctrine Command quote is this.

Get ready.

This is a direct quote.

The first wow moment with BigID came with just being able to have that single interface that inventories a variety of data holdings.

I've never seen a capability that brings this together like BigID does.

They see your data everywhere, inside zip files, in all kinds of databases, in all different cloud and on-prem locations, in every little closet, nook, and cranny.

By the way, BigID is going to be at their big CISO Digital Summit.

October 17th, 11 a.m. Eastern Time, and you don't have to be there.

You can watch it online.

It's centered around the next era of data security.

This virtual summit will feature deep dives into the latest data security practices and technologies.

They'll explore everything from DSPM to AI.

AI is a big part of this because if you want to use AI in an enterprise or in the Army, you've got to know what to train it on and what not to train it on, right?

Not all the data is the same.

And they'll talk about DLP and beyond with expert-led panel sessions and interactive discussions with your peers.

They have a great lineup of speakers, a keynote from the head of cybersecurity and compliance at Denny's,

and you can get two CPE credits and a raffle entry just for attending.

So make sure to tune in, strengthen your organization's security posture in an ever-evolving digital landscape.

This is going to be a great event.

Again, October 17th.

Put that in your calendar.

Put a pin in it.

11 a.m. Eastern Time.

For this incredible panel.

Start protecting your sensitive data wherever your data lives.

BigID.com slash security now.

Lots of information there.

You get a free demo on how BigID can help your organization reduce data risk and accelerate the adoption of generative AI.

That's BigID, B-I-G-I-D, B-I-G-I-D.com slash security now.

Also, I said there's lots of information.

There's a free new report right there.

It provides valuable insights and key trends on AI adoption challenges and the overall impact of gen AI across organizations.

It's all there for you.

It's free.

BigID.com slash security now.

Please check it out.

BigID.com slash security now.

We thank BigID for their support of this fabulous show and the great work that Steve does.

I am ready with a picture of the week.

This is a goodie.

So, I gave this one the caption,

When the universe is suggesting that you should take the stairs, listen.

Oh, dear.

Because we have what appears to be a not that well-maintained kind of grungy elevator interior.

And it's got some instructions over the panel.

Where you push the button for which floor you want to go to.

It says, if elevator does not move, do a small jump.

It should move after.

Now, again, if you get into an elevator and you see that signage, the stairs really are looking better.

Good thinking.

Yes.

So, yes.

You know, and I don't know.

I can't.

There's some signage.

Off to the right, there's something about a guy with a mask.

It looks like delivery drivers must wear something or their own.

And it says, and have temperature, blah, blah, blah.

So, like, have their temperature taken or something.

And then down below, it says, you M, and then I see C-L-O-S, and then E-L-E.

So, like, maybe you must, what, manually close the elevator doors or something?

You must close elevator doors before pressing a button.

That would be good.

Unless you want a good view as you.

Or just jump up and down.

And that'll get the elevator going.

Do you think that's a joke?

Do you think it's a joke?

No, I think it's like there's.

It's stuck a little bit?

This is, again, you get in, you see the sign, and you get out.

And you just, and I say, so that's why the caption, when the universe is suggesting that you should take the stairs, listen.

Yeah.

Because, yeah.

Anyway, thank you.

I will thank endlessly our listeners.

We've got some goodies in the queue.

So, another great picture.

Okay, so, I gave this week's lead story the title, Telegram Puts End-to-End Privacy in the Crosshairs, because I think that's probably what's ultimately being tested here.

At the time, as we said, at the time of last week's podcast, the news was that Pavel Durov, the founder, also owner, and CEO of the Telegram instant messaging system, had been detained in France after he flew into and landed on French territory on a private jet.

Next, we learned that his status had changed from detained to formally arrested.

And then, last Wednesday, he was released on 5 million euros bail and is banned from leaving France, since he's now facing charges over his responsibility, this is what they're alleging, for the many illegal, and in some cases abhorrent, things that Telegram's users have been found doing in light of there being no content moderation, mediation, anything of any kind within Telegram. And they're holding Pavel responsible for that. And of course, the reason this is intensely interesting, especially to this audience, is that

it brings us back to the big and still unanswered question of how the world is

ultimately going to deal with end-to-end encrypted messaging and whether

governments are going to allow their citizens to hold truly private electronic

conversations without any form of content moderating oversight.

And in the present case of Telegram, the charges which French authorities have

levied against Pavel include being complicit in running an online platform that

allows sharing of CSAM, which, as we know, is the abbreviation for child sexual

abuse material, also drug trafficking, fraud, and money laundering, as well as not

cooperating with authorities when required to do so by law.

Now, the French news outlet Le Monde reported that Durov had been arrested for a series of alleged crimes, including a crime involving a child.

The police office that tackles violent crimes against children issued a warrant for his

arrest. And in a LinkedIn post that was later deleted, that office's secretary general said

that, quote, at the heart of this case is the lack of moderation and cooperation of

the platform, which has nearly one billion users in total, though not all in the EU, far fewer than that, particularly in the fight against what they called pedocriminality. And the EU arm of Politico reported that the specific incident

that was cited in the arrest warrant was Telegram's refusal to identify a specific user

after being served with a judicial request. Politico wrote after viewing a document related

to the warrant, quote, the warrants for Pavel Durov and his brother Nikolai were issued after

an undercover investigation into Telegram led

by the cybercrime branch of the Paris prosecutor's office, during which a suspect discussed luring

underage girls into sending, quote, self-produced child pornography, unquote, and then threatened

to release it on social media. So, you know, creeps are on Telegram, okay? According to the document,

the suspect also told the undercover investigators that he had raped a young child. Telegram did not

respond to the French authorities' request for comment.

Meanwhile, Telegram has deliberately structured itself and its infrastructure to confound court orders. Telegram unabashedly boasts. So here's from their FAQ. They said,

quote, Telegram uses a distributed infrastructure. Cloud chat data is stored in multiple data

centers around the globe that are controlled by different legal entities spread across

different jurisdictions. The relevant decryption keys are split into parts and are never kept in

the same place as the data they protect. As a result, several court orders from differing

jurisdictions are required to force us to give up any data. Thanks to this structure, we can ensure

that no single government or block of like-minded countries can intrude on people's privacy and

freedom of expression. Telegram can be forced to give up data only if an issue is grave and

universal.
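Telegram doesn't publish the mechanics behind that key-splitting claim, but the basic idea of splitting a decryption key into parts that are individually useless can be sketched with simple XOR secret sharing. This is a toy illustration only; the scheme and function names here are assumptions for demonstration, not Telegram's actual design:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n_shares: int) -> list[bytes]:
    """Split a key into n shares; ALL n shares are needed to rebuild it."""
    # The first n-1 shares are uniformly random pads.
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    # The final share is the key XORed with every pad, so XORing all n
    # shares recovers the key, while any subset of fewer than n shares
    # is statistically indistinguishable from random noise.
    shares.append(reduce(xor_bytes, shares, key))
    return shares

def recombine(shares: list[bytes]) -> bytes:
    """Recover the key by XORing every share together."""
    return reduce(xor_bytes, shares)
```

With shares parked under different legal entities in different jurisdictions, a court order served on any one of them would yield only random-looking bytes; only all of the shares together recover the key, which is the effect Telegram's FAQ is claiming.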

To this day, we have disclosed, they're saying, zero bytes of user data to third parties,

including governments, unquote. Their terms of service do state that illegal pornographic

content is not allowed on its publicly viewable areas, but that doesn't stop people from doing

that. Its FAQ says its moderators take action on illegal content in these areas, which comprise sticker sets, channels, and bots.

However, Telegram assures its users that, quote, all Telegram chats and group chats are private

amongst their participants. We do not process any requests related to them, unquote. So,

in other words, within any private groups, which may include up to 200,000 people,

anything goes, without any supervision, and with an explicit guarantee of technically imposed privacy. So,

it should be no surprise that many investigations have found child abuse material for sale on

Telegram. However, there are some interesting details here. It's an example of an instance

where the details matter and where encryption may not mean what its users imagine. This is why

today's podcast topic will address the interesting question of whether or not and to what degree

Telegram is actually an encrypted app and exactly what that term means. Our long-time listeners may

recall that I've never been impressed with Telegram's encryption from day one because

it's a perfect example of what we all know should not be done. Telegram uses a homegrown

cypher that a couple of guys just made up.

His brother, Pavel's brother.

Right. Yeah, it's like, okay, and it's got some wacky name. I'll get to it later,

but it's like the information garbling protocol or something. It's like, what? And literally,

Matthew says, WTF? It's like, he's like, whoa. Anyway, they did this well after the world had learned how to do encryption correctly. So, as we've said a long time ago, nobody needs

another cypher. Nobody needs another hash. Those building blocks are in place. They've been time

and academically and in the wild tested. They work. They're solid. So, don't just go

gluing some together in some weird arrangement and say,

we dare you to break it. And of course, the fact that they've offered a large cash prize to anyone

who could break it does not change the fact that it's not based on any sound formal design

or tested cryptographic system. So, anyway, we're going to take a far closer look at Telegram at the

end of today's podcast, since as I said at the top of the show, Johns Hopkins cryptographer,

Matthew Green, just posted an intriguing,

unique piece titled, Is Telegram Really an Encrypted Messaging App? Okay, but be that as it

may, Telegram does offer one important feature that makes it unique among all of the private

messaging systems. Whereas Telegram, as I noted earlier, can comfortably provide privacy for

200,000 members of a large group, Apple's iMessage groups are limited to,

32 participants, Signal groups are limited to 1,000, and WhatsApp's variant of Signal

limits group size to 1024. It turns out that implementing true end-to-end encryption

across large groups with many participants is not trivial. But what much of the media misses

is that, as we'll see, Telegram doesn't actually do that.
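One way to see why true end-to-end encryption for huge groups is hard is to count encryption operations. A naive design encrypts every message once per recipient, while the "sender keys" approach used in Signal-style group messaging pays the per-member cost once, at key distribution time. This is a rough back-of-the-envelope model, my simplification rather than any app's exact protocol (real systems also re-key when membership changes):

```python
def naive_pairwise_encryptions(group_size: int, messages_sent: int) -> int:
    # Naive pairwise E2EE: the sender separately encrypts each message
    # to each of the other group members.
    return messages_sent * (group_size - 1)

def sender_key_encryptions(group_size: int, messages_sent: int) -> int:
    # Sender-keys style: distribute one symmetric "sender key" pairwise
    # (group_size - 1 encryptions, once), then encrypt each message just
    # once under that key.
    return (group_size - 1) + messages_sent

# Compare the costs at iMessage, Signal, and Telegram group scales.
for size in (32, 1_000, 200_000):
    print(size, naive_pairwise_encryptions(size, 100), sender_key_encryptions(size, 100))
```

At Telegram's 200,000-member scale, the naive approach needs nearly 20 million encryptions for 100 messages, versus about 200,000 for the sender-key approach, and even that assumes no membership churn, which is why huge groups and true E2EE rarely coexist.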

So, you know, their unique value proposition is to provide large groups with unmoderated

communication and certainly some degree of privacy. Telegram describes itself as a cloud-based

messenger that provides, quote, seamless sync, unquote, across devices. But to do that, it

needs to have access to the content of those messages. And we know that because Telegram

themselves can access the content of conversations. So it certainly could invest

in moderation if it chose to. It chooses not to. NBC News reported that child safety groups in the

U.S., U.K., and Canada all get short shrift from Telegram when they report CSAM on the platform.

And for example, this is in contrast to an app like Signal,

which also espouses and has the technology to actually enforce privacy-first values.

Signal built its app so that its technology enforces those values as much as possible. So although Signal collects no content from its users

and only minimal metadata about how they use the service, Signal is able to and will respond to

law enforcement requests, but only to the extent of providing account creation dates and the date an account last

accessed Signal. This means that while Signal is not in practice a great deal more helpful than

Telegram, at least Signal is not openly combative and can honestly say that it has wholeheartedly

cooperated with court orders to the limit of its ability and technology. When Telegram says that,

it's not true.

The arrest of Pavel Durov may just be a shot across the bow, and it might wind up being good

for Telegram's business model to see their founder and CEO being detained and tried for his refusal

to comply. Since Telegram currently has only 41 million users in the European Union, this falls

short of the 45 million user threshold that would subject it to the EU's Digital Services Act. With Telegram not categorized as a very large online platform, it's not subject to the EU's

stricter transparency and content moderation rules. However, the Financial Times recently reported

that the EU is now investigating Telegram for misrepresenting the total number of EU users

in order to fall below that 45 million user threshold.

Now, right, well, I'm shocked. Last February's claim that they only have 41 million

users within the EU is going to be carefully examined for its veracity. Now, the one thing

that, like, this gives me the occasion, and I think it's important to observe before we move on,

and we'll be coming back to what Matthew Green said in a minute,

is that both of today's

major mobile platforms, iOS and Android, manage their client apps with an iron grip.

They do this to enforce both security and control over these client apps. And we spent a lot of time

through the years talking about all of the ins and outs and mechanisms for this.

So, for example, you know, something close to home here, the reason Spinrite boots directly

over its users' hardware and brings along its own minimal OS is because it cannot obtain the direct hardware access it

requires from within any operating system environment. But nothing like that exists

for our mobile operating systems. None of the various messaging platforms are able to obtain

anything approaching direct access to the platform's underlying hardware.

So, we should always,

be mindful of the fact that the OS runs the camera, runs the screen, and runs the virtual

keyboard. And that access to those resources is granted to the applications that are running.

That's why we're able to seamlessly switch among applications without any application being able

to prevent that. Apps are powerless clients of their mobile platform OS.

So,

a messaging app, such as Signal, WhatsApp, Telegram, or iMessage, may be as clever as it

wishes with how the content that it communicates is encrypted and decrypted. But everything that

is eventually communicated to and from its users passes through the control of the OS.

And that OS is always able to see everything that's happening in the clear without any form

of obfuscation or encryption. I think we need to hold on to that because it's easy

to get focused on the ins and outs and specifics of any given messenger. But our mobile OSs have an

iron grip over all of these messaging apps. And what the user sees, the decrypted content coming out of the

app and onto the device's UI surface and going in, the OS is the one that has unfettered,

unencrypted access to that. So, there's a bit of a game of hot potato going on here

with no one wanting to take responsibility,

for the content that's passing through their control before it's encrypted or after it's

decrypted. But the truth is that the vendor of the underlying platform, Apple or the supplier

of an Android OS, is in the unique position to monitor what's going on before it's turned over

to any messaging app and to similarly inspect what its client apps decrypt before it's presented to

the user.

Now, we know, we've talked about this too, I mean, this is a difficult subject.

We know how adamantly the platform vendors want to stay as far away as possible from taking

any responsibility for what their users and their client apps do. And I know that we all

want to retain the total privacy that we're currently being assured we're receiving. But

Pavel Durov's arrest and indictment by French authorities shows us that we should probably

regard the privacy we're enjoying today as fleeting, since no government wants to be

completely blind to the conduct of its citizenry. Okay, so to set the stage for some news,

recall that five months ago, last April, the U.S. Cyber Safety Review Board released a rather

scathing report, which squarely placed the blame on Microsoft for the nation-state-linked intrusion

into Microsoft Exchange Online, which led to the theft of about 60,000 U.S. State Department emails

that previous summer. The CSRB report stated that the breach, quote,

was preventable and should never have occurred, unquote. The report elaborated that a series of

operational and strategic decisions

by Microsoft pointed to a corporate culture that deprioritized investments in enterprise security

and rigorous risk management, despite the central role the company plays in the larger technology

ecosystem. The CSRB urged Microsoft to publicly share its plans to make fundamental security-focused

reforms across the company and its suite of products. The board also recommended that all

cloud service providers, including Microsoft, share their plans with government partners to enact security-focused changes.

Okay, so among the criticism that was heaped upon Microsoft last year was that it was charging,

you know, extra money for zero-cost features such as security logging that would have gone a long way

had more of the government entities been using them to help detect the early stages of the various

intrusions its users and customers have been experiencing. The tech-savvy Senator Ron Wyden

said at the time, quote, unfortunately, as Microsoft's $15 billion plus cybersecurity

business grows, Microsoft's incentives are not to deliver secure operating systems and cloud

software to its customers, but to deliver insecure products and upsell them on cybersecurity add-ons. It should not have taken multiple disastrous hacks of federal systems

for Microsoft to make essential security features standard for government customers,

but better late than never, unquote. So we're talking about this today now,

because one year later, evidence is emerging of the effect, the beneficial effect of something

as simple as free security logging.

Last Tuesday, the publication Cybersecurity Dive posted a report titled CISA Officials

Credit Microsoft Security Log Expansion for Improved Threat Visibility. They wrote,

greater access to Microsoft event logs is paying off for U.S. government agencies and critical

infrastructure providers, which have gained greater cybersecurity visibility into their network environments, the Cybersecurity and Infrastructure Security Agency said Saturday. You know, CISA. Microsoft expanded free access to security logs in 2023 after a state

linked threat actor stole thousands of emails from the State Department after gaining access

to Microsoft Exchange Online. Jeff Green, CISA's Executive Assistant Director for Cybersecurity,

confirmed via email, quote, yes,

Microsoft has expanded access to the logging elements that were used by the State Department

to detect the 2023 compromise to a vastly larger set of customers, including all federal agencies

and numerous critical infrastructure organizations. These new logs are being used by organizations

today to detect threats. Green added, CISA will continue to work with Microsoft and other

companies to ensure that their products are secure by design and that Microsoft lives up to the

commitments it has publicly announced around improving the security of its products following

the 2023 compromise. The win for the U.S. government comes as CISA, along with the FBI,

National Security Agency, and a group of foreign cybersecurity authorities led by Australia,

released a best practices guide for event logging last week. The new guide,

is part of an effort to combat sophisticated threat activity from state-linked threat groups

such as Volt Typhoon. The group uses living-off-the-land techniques to disguise its

threat activities using normal security tools that won't trigger alerts when moved around

computer networks. Security researchers at ReliaQuest have been tracking a ransomware

actor known as Medusa, which is also using living-off-the-land techniques in multiple

attacks. Alex Capraro, cyber intelligence analyst at ReliaQuest, said via email,

quote, by implementing the best practices for event logging and threat detection outlined in

this guide, organizations can enhance their ability to identify and mitigate malicious

activities, thereby protecting their network's devices and data from compromise. So, yay,

you know, it's unfortunate that Microsoft had to, you know, have so many problems and come

under so much pressure before it made something that costs it virtually nothing free because it

was making money from selling something that it's no longer making money from. But really, I mean,

anyone who's got any experience with IT security understands and has, I'm sure, used logs to find

out what's going on. I think the first instance where I saw logging being used to a level that at the time I thought was a little

over the top was about 25 years ago, Mark Thompson, my friend whose site is AnalogX.com,

Leo and I know Mark, he's been a friend of ours for decades.

Is he still doing his AnalogX?

Yeah, he's still busy doing stuff, Leo. You know, Mark made a comment about, I think it was,

we were talking about something and he made a comment that,

he was logging something and I thought, I mean, it was like logging how long his toothbrush takes

to charge or something. I mean, it was like, what? But what do you know? Turns out that that was

useful somehow. And I know that over time, I've increased the amount of logging I'm doing. And

sure enough, some, I mean, I guess the point is,

you don't know what you don't know until you, you know, wish you knew. And if you've got,

if like, if everything is being logged, then you may have to do some log processing,

you know, and I roll logs over monthly and zip them down because they, logs tend to compress

massively down to things that are much smaller. But, you know, sure enough, I find myself going back and looking through logs to obtain information that I wasn't

specifically thinking that I would need. But, you know, if you log it, it's going to be there when

you need it. So anyway, the idea, I'm not at all surprised that this is a benefit. And, you know,

you could argue 30 years ago that hard drives were expensive and, you know, you didn't want

to log everything as, oh, think of all the space it would take. Now, you know, hard drives,

data storage is just, it's free. So why not log? And on that note, Leo,

Log everything.

Log everything. Why not take a break to tell us about our sponsor? And then we're going to talk

about CrowdStrike and how they're doing with their customers.

Gladly, gladly, Steve. Our show today brought to you not by CrowdStrike,

although they were a sponsor for a long time. And I thought the world of them. I hope they're still doing well.

They are, Leo. I would not have left them after this.

Okay. There you go. There you go. That's a kind of a tepid endorsement.

That's an endorsement nonetheless. Our show today brought to you by ThreatLocker. Now,

this is a company you definitely want to keep. ThreatLocker. So here's the question. Do zero

day exploits and supply chain attacks keep you up at night? Worry no more. You can harden your

security with ThreatLocker. ThreatLocker is zero trust done right. And you know, you may remember a few weeks ago that a number of airlines

struggled a little bit after a big security thing. Not JetBlue. JetBlue trusts ThreatLocker

to secure their data and keep their business operations flying high, which is a good thing

because I'm flying JetBlue a day after tomorrow. Imagine taking a proactive, this is zero trust,

a proactive deny by default approach to cybersecurity.

Blocking every action, every process, every user, unless authorized by your team.

That's security made simple. ThreatLocker lets you do this and, and just as important,

provides a full audit for every action that helps you with risk management and compliance.

Their 24/7 US-based support team fully supports your onboarding and beyond. So don't

worry about getting started or continuing. It's as easy as it can be. Stop the exploitation

of trusted applications within your organization and keep your business secure, protected from

ransomware. Organizations across any business, any industry, can benefit from ThreatLocker's

ring fencing.

That's what they call it ring fencing by isolating critical and trusted applications from unintended

uses or weaponization and limiting attacker's lateral movement within their network. ThreatLocker's

ring fencing has a really great track record. They were able to foil a number of attacks

that were not stopped by traditional EDR. You may remember us talking a few years ago about

SolarWinds Orion. That attack, the 2020 SolarWinds Orion attack, was foiled by ThreatLocker's ring

fencing. And yes, ThreatLocker works for Macs too. Get unprecedented visibility and control of your

cybersecurity quickly, easily, and cost-effectively. ThreatLocker's Zero Trust Endpoint Protection

platform offers a unified approach to protecting users, devices, and networks against the

exploitation of zero-day vulnerabilities. And that's got to be good. Get a free 30-day trial.

Learn more about how ThreatLocker can help mitigate unknown threats and ensure compliance.

Visit ThreatLocker.com. It's a great name, isn't it, for a zero-trust solution? ThreatLocker.com.

Lock those threats up with ThreatLocker.

All right, Mr. G, on we go with the show.

So far, CrowdStrike reports that it expects to lose around $60 million in net new annual

recurring revenue and subscription revenue in the aftermath of its technical outage.

Now, I don't have a good sense for what that represents as a percentage of total revenue,

but it does not sound like much because CrowdStrike

is, you know, it's the big player in this EDR, the Endpoint Detection and Response Market.

So, you know, nevertheless, CrowdStrike is endeavoring, as you would expect them to,

to retain customers by offering various discounts. Their CEO, George Kurtz, denied rumors that the

company was losing customers to rivals. But of course, that will happen to some degree after

the so-called CrowdStrike outage, as it's now known.

Although, you know, as I said, I'm sure I would be staying unless I was in some way otherwise

unhappy because the changes they've made since have seemed solid. We know we talked about last

week how George went to the Pwnie Awards and accepted like the biggest mistake ever in history

award, like in person.

Whereas other companies like Microsoft have just blown it off. You're not going to get

anyone from Microsoft there. So that was impressive. And I mean, a broken bone is

always stronger when it heals, right? They've really, yeah. I mean, you could have an employee

that screws up and like over and over and over and refused to learn a lesson, in which case,

okay, fine, we're going to have to let you go. But, you know, it's possible also to learn a lesson.

And I'm sure they've...

They've, you know, this really sunk in. You know, George said that he's putting that award

in the lobby so that all the employees have to look at it.

That's a good idea. That's a great idea.

When they come into work every morning, it's like...

We don't want another one of these, okay?

Yeah. Yeah. So for what it's worth, both SentinelOne and Palo Alto Networks have claimed

that they've been fielding calls from soon-to-be ex-CrowdStrike customers over the past few weeks.

Again, I don't doubt that

for a moment. But to me, it doesn't seem like that many are leaving. And we actually...

I sent the email for this podcast out early today because I started working on it really

early yesterday and got it... And so got it finished earlier this morning than I normally do

and notified about 8,900 of our listeners of the contents. We have a listener who works at

CrowdStrike and he already sent me some feedback and said, for what it's worth, we're doing fine.

And so I'm sure they are. And on that note, interestingly, Microsoft will be meeting

privately with a bunch of these guys. Exactly one week from today, on September 10th, Microsoft

will host a Windows Endpoint Security Ecosystem Summit, their announcement said, at their Redmond,

Washington headquarters.

Their announcement said, Microsoft, CrowdStrike, and other key partners who deliver endpoint

security technologies will come together for discussions about improving resiliency

and protecting mutual customers' critical infrastructure. They said, our objective

is to discuss concrete steps we will all take to improve security and resiliency for our joint

customers. The CrowdStrike outage, this is Microsoft's phraseology,

in July 2024, presents important lessons for us to apply as an ecosystem. Our discussions will focus

on improving security and safe deployment practices, designing systems for resiliency,

and working together as a thriving community of partners. We're all happy here. To best serve

customers now and in the future. They finished saying, in addition to ecosystem partners,

Microsoft will invite government representatives to ensure

the highest level of transparency to the community's collaboration to deliver more secure

and reliable technology for all. It's expected that the Windows Endpoint Security Ecosystem Summit

will lead to next steps in both short and long-term actions and initiatives to pursue

with improved security and resilience as our collective goal. We will share further updates

on these conversations following the event. So,

I would imagine that the government representatives are invited as a means of showing that something is

being done to keep anything like this from ever happening again. And in other reporting about this,

I saw that Microsoft plans, not surprisingly, to discuss new ways of building these EDR products

so that they can still get their job done while relying more on safer user mode code and less on

proprietary kernel drivers.

That's the key, isn't it? Keep them out of ring zero.

Yeah.

Give them an API.

Yeah. And it's really, well, okay. So, it's difficult to do. That is, essentially,

Microsoft would have to provide hooks all over the place, which the various EDR vendors now

create for themselves. They install a driver, and when the system's booting up, they go and hook a whole bunch of Microsoft's

APIs themselves. And by hook, I mean, essentially, the idea is that they re-vector the API service,

which the OS publishes, so that any client running on Windows actually calls into this

driver, the proprietary third-party driver, which examines the call, decides what it thinks

about it, and then, if it looks okay, forwards it to Windows, to the Windows kernel, where

it would have normally gone directly to.

So, basically, it's a comprehensive filter wrapping around the Windows OS.

So, you know, Microsoft doesn't want to offer that.

I mean, and this is why it's been so limited so far.

There are some things, yes, that you can do.

You know, AV vendors have some hooks they can use, but nothing like the degree of low-level

access that is really necessary to monitor the behavior of things that are trying to

use Windows.
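To make the re-vectoring idea concrete, here's a toy user-mode analogy in Python. Real EDR hooking happens in the kernel with native code patching of API vectors; everything here, the function names included, is invented purely to illustrate the shape of the technique.

```python
# Toy analogy of API hooking: a published entry point is replaced with a
# filter that inspects each call, then either blocks it or forwards it to
# the routine the call would normally have reached directly.

def os_open_file(path):
    # Stand-in for a real OS service that clients call.
    return f"handle:{path}"

_original = os_open_file            # save the original vector

def edr_filter(path):
    # Examine the call and decide what we think about it.
    if path.endswith(".blocked"):   # hypothetical "malicious" marker
        raise PermissionError(f"EDR policy blocked {path}")
    return _original(path)          # looks okay: forward to the OS

os_open_file = edr_filter           # the "hook": re-vector the published API
```

After the last line, every caller that uses the published name transparently goes through the filter first, which is exactly the wrap-around-the-OS behavior Steve describes, minus the kernel-mode risk that made the CrowdStrike incident possible.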

So, it's going to be, I mean, it's an interesting dance.

And, of course, Microsoft is marketing the crap out of their own solution, you know,

Windows Defender for Enterprise and everything, because it's like, well, if you just used

ours, you wouldn't have had a problem.

It's like, right, nor would we have had the functionality.

You know, we heard from many users who are using CrowdStrike who said, this thing saved

our bacon a number of times.

So, yeah, we weren't happy that we all had to get up at 1 a.m.

in the morning.

We worked all day and lost a day of productivity, but, you know, we're sticking with them.

So, anyway, it is certainly the case, and we expect it, right, that Microsoft would

be holding a meeting with the vendors and say, okay, what do we do about this?

And, of course, Microsoft had some responsibility, too.

Many people were saying, why didn't Windows, like, safe boot fix this?

Why wasn't it possible to identify the source of the trouble and then say, okay, well, we're

going to bring you back up, but you're not going to have your EDR solution enabled until

you, you know, roll it back somehow.

So, Microsoft's resilience, you know, the core Windows resilience could have been much

higher than it actually turned out to be.

So, yeah, lots of things for everybody to fix.

I just want to note, in passing, that Yelp has filed an antitrust lawsuit.

It seems, Leo, that Google has reached the size that Microsoft once did back in those

days, and, you know, their behavior is being viewed as a little aggressive by an increasing

number of entities.

In this case, Yelp is alleging that Google has a monopoly over the search market, no

surprise there, which it is abusing to promote its own review business.

Which, of course, Yelp is a famous, yeah, famous reviewer.

Yelp's been, by the way, whining about this for decades.

They went to the EU.

That's one of the reasons the EU investigated Google in the first place.

So, it's nothing new.

It's interesting they're taking a direct approach now.

Yes.

And it's certainly true that, as we know, controlling search is an incredibly powerful

place to be, right?

We view the internet through what our chosen search engine is.

Reveals to us.

And I've spoken of her before.

My wonderful Luddite realtor friend thought that Google was the internet.

She, you know, she didn't understand, like, when she went to the Google, that she wasn't,

like, that it wasn't the internet.

Yeah.

It's like, that's, you know.

So, yeah.

She was actually more insightful than we might realize.

Yeah.

I tried.

Oh, no, Judy.

That's not the way.

Just you go away, Steve.

I know what I'm doing.

How do I find it?

If it's not on Google?

That's right.

Doesn't exist.

Good luck.

Yeah.

So, anyway, there, of course, there's a reason why SEO, you know, search engine optimization

is a booming business.

But it matters, you know, if you're on the first page of Google's results or the second

or where.

Okay.

So, everyone seems to be piling on Telegram this week.

And it's, you know, not as if they probably don't deserve more attention.

And in today's internet threat landscape.

You know, that's what's going to happen on any large, unmoderated social network platform,

which everyone assumes is somehow secure.

In this instance, the security firm Falcon Feeds has taken a deep look into the flourishing

business of DDoS for hire, or more specifically, DDoSaaS, as it's formally called, DDoS as a

service.

So, add the S-A-A-S on the end, DDoS as a service.

In a posting last Thursday titled, DDoS as a service, the dominating phenomenon on Telegram,

Jacob Abram wrote, he said, in today's digital landscape, distributed denial of service

attacks have become one of the most powerful tools in a cybercriminal's arsenal.

These attacks, often facilitated by DDoS-as-a-service platforms,

DDoS-for-hire services, and botnet-for-hire networks, can disrupt online services, extort

businesses, and even advance political agendas.

At falconfeeds.io, our latest research reveals a staggering 3,529 DDoS incidents occurred

in Europe during just the first half of 2024.

Making up 60%, that's six-zero percent, of the total cyber attacks we analyzed.

The rise of DDoS as a service on platforms like Telegram is a significant contributor

to this alarming trend.

Telegram has emerged as a hotbed for cyber criminals looking to offer DDoS as a service.

On various Telegram channels and groups, vendors openly advertise a range of DDoS attack services

at different levels.

Telegram's encryption and anonymity features create an ideal environment for these illegal

activities to flourish unchecked.

Our research, they wrote, has identified over 140 Telegram channels and groups actively

offering these services.

With 80% of them.

This trend underscores the growing accessibility and anonymity of DDoS attacks, posing a significant

threat to businesses and individuals alike.

So-called basic attacks are available for as little as $10 per month.

And the power and cost scales upward, with more sophisticated attacks being prolonged,

with high intensity costing as much as thousands of dollars.

Price lists are often displayed on Telegram channels, with discounts available for repeat

customers or bulk orders.

Oh my God.

This availability and accessibility has turned DDoS into a commodity available to anyone

willing to pay.

And again, where are these services to be found?

On Telegram.

Okay, but wait, Steve.

You no longer even need the dark web.

Read that whole news story again, replacing Telegram with the internet and channels with

webpages.

It's the same story.

So, so I don't understand what the point is.

Yeah, you can also get all of that stuff on the internet.

Getting rid of Telegram won't solve that problem.

Well, you, I'm not aware of any website that you just go to on the internet.

Oh, we've shown them.

Well, that's,

that's the dark web, which is very, very difficult to get to.

You have to have Tor.

You've got to have onion addresses.

I mean, so it's making this so easy that that's the problem.

Yes.

And you, as we know, ease of access, you know, really changes the whole threat landscape.

Yeah.

We'd like to keep attacks away from the unwashed masses.

We only want people who know what they're doing to do it.

$10.

Leo, the bar, the bar of entry is really low.

Easy.

Yeah.

Okay.

So last Wednesday, Google announced that it would be increasing, in some cases by as much

as a factor of five, the reward bounties it would be offering for the responsible

discovery and then disclosure, you know, private reporting to them, of Chrome exploits,

due to the increased difficulty, which is good news for everyone, of exploiting Chrome.

Yeah.

So that's all good news for the world's number one web browser.

Google wrote, time flies. Believe it or not,

Chrome browser turns 16 this year. Which means you and I have been doing this podcast, we're really

old, longer than Chrome has been around.

We were three years into this before Chrome happened.

I should go back and find that episode where you do this story.

And now Google has announced it's going to release its own browser.

That would be interesting.

Yeah.

I mean, remember we were talking about IE six.

That's true.

That's a good point.

Beginning, you know, and like Firefox four or something.

We're almost as old as Google itself.

To be honest, we've been around a while.

I do remember when a friend of mine said, hey, uh, because we were all using

AltaVista.

That was the best search engine that there was then.

Right.

And he said, Hey, some Stanford guys came up with something.

Check this out.

It's called, it's got a strange name.

It's Google.

It's like, what?

I remember when Dvorak would use that as a litmus test to see if you were really a geek.

He would say, eh, what search engine do you use?

And if you said Excite or AltaVista or Yahoo, you flunked. But if you said

Google, you passed. It doesn't work anymore.

No, no, no.

My realtor is using Google.

So it's the internet.

That's right.

Okay.

So, um, so 16 years.

Um, and their VRP, which is their Vulnerability Rewards Program, to their credit, is turning

14.

So it only took them two years.

Chrome was two years old when they decided, you know, we should start rewarding people

for finding vulnerabilities in Chrome.

So that's good.

Um, uh, Google posted: as Chrome has matured over these years, finding the most impactful

and exploitable bugs has become more challenging. At the same time,

new features are frequently

introduced into Chrome that may result in new issues, which we also want to encourage

being reported.

Therefore it is time to evolve the Chrome VRP rewards and amounts to provide an improved

structure and clearer expectations for security researchers reporting bugs to us, and to

incentivize high quality reporting and deeper research of Chrome

vulnerabilities, exploring them to their full impact and exploitability potential.

In this blog post, we'll explain how we've moved away from a single table of reward amounts for non-mitigated

bugs and separated out memory corruption issues from other classes of vulnerabilities.

This will allow us to better incentivize more impactful research in each area and also reward

for higher quality and more impactful reporting.

Now, I should mention that, reading between the lines, what they're sort of saying is

we're willing to pay.

If you're willing to do more work after you find a problem.

In other words, a lot of people have been saying, Hey, look, I made Chrome crash pay

out.

And now Google is saying, well, okay, if you just make a crash, this is how much you

get.

But if you're willing to like go deeper and do more of our work for us post crash, then

we're willing to make it worth your time.

And that makes sense to me.

I mean, that's good because, uh, wait till you hear what you can earn if you go all the

way here.

So they wrote, we've remodeled our reward structure for memory corruption vulnerabilities

into the following categories.

They've got four. First, high quality

report with demonstration of remote code execution.

They said, report clearly demonstrates remote code execution, such as through a functional

exploit.

And that's the big money. Second, high quality report demonstrating controlled write,

where a report clearly demonstrates attacker controlled writing of arbitrary locations

in memory.

Third, high quality report of memory corruption.

Report of demonstrated memory corruption in Chrome that consists of all the characteristics

of a high quality report.

And finally, baseline is their minimum.

They said a report consisting of a stack trace and proof of concept displaying evidence that

memory corruption is triggerable and reachable in Chrome.

So right, different levels, you know, different bar settings that they're asking you to jump

over.

And they said, while the reward amounts for baseline reports of memory

corruption will remain consistent,

We have increased reward amounts in the other categories, meaning where you're willing to

go do more work and give us more with the goal of incentivizing deeper research into

the full consequences of a given issue. The highest potential reward amount

for a single issue is now $250,000, a quarter million dollars.

That's enough to live on for a few months.

Yes, it is, for a demonstrated remote code execution in a non-sandboxed process.

If the RCE in a non-sandboxed process can be achieved without a renderer compromise, it

is eligible

for an even higher reward, to include the renderer RCE reward.

So you can get them both.

So I've got a link in the show notes.

I'm not going to go into any finer detail here, but anyone who's interested for in more

detail can follow the link.

It's to Google's Bug Hunters posting.

And I think it's a good move and good news.

That since Chrome

is becoming more difficult to exploit, the payouts are increasing, you know, commensurately.

This may also be the first time.

And I really give them credit for this.

The first time I've ever anywhere seen a software publisher actually say, and I quote,

at the same time, new features are frequently introduced into Chrome that may result in

new issues, which we.

Also want to encourage being reported anyway, you know, anyone who's been following this

podcast for more than a few months will think, yeah, of course, but you know, cause we talk

about this all the time.

Like Microsoft won't leave windows alone, so they're never getting the bugs fixed.

They're introducing as many every month as they're fixing.

So it's just rolling forward.

Yeah.

But for them to admit it, it's a pretty big deal.

Yes.

Who's ever actually heard any publishers say that.

So.

Yeah.

Props to Google for that.

Yeah.

Okay.

Yikes.

Believe it or not, Leo, when I encountered this next bit of news, I thought I was experiencing

deja vu.

The summary was titled CMG's active listening, and it read after media companies and device

vendors spent a decade telling customers

that microphones baked into their devices are not secretly recording audio, a leaked

pitch deck from the Cox Media Group,

CMG, is advertising a new service that can show ads to users based on what they've said

near microphones.

Google kicked CMG from its

advertising platform after 404 Media acquired the slide deck and then asked Google

to comment.

Okay.

Now, when I read that it was ringing some bells, I went to GRC's security now page and

entered Cox media group into the search bar in the upper right of all of GRC's pages.

The first link and summary that appeared was from our podcast number nine hundred and seventy.

I'm sorry.

Nine hundred and fifty three.

That was the last podcast of last year, dated December 21st of 2023, and that podcast was

titled Active Listening, after the news of what CMG was reportedly

doing and bragging about on their own Web page, which had the URL ending in active listening,

an overview.

They took down the page, but not before the Internet Archive spiders found and archived

the page.

And that was GRC's shortcut of the week, which is still pointing to the page in question.

So GRC dot SC slash nine five three.

And it's still every bit as unnerving as it was nine months ago.

The page starts out saying, imagine a world where you can read minds.

One where you know the second someone in your area is concerned about mold

in their closet, where you have access to a list of leads who are unhappy with their

current contractor, or know who's struggling to pick the perfect fine

dining restaurant to propose to their discerning future fiancé.

They say, this is a world where no pre-purchase

murmurs go unanalyzed, and the whispers of consumers become a tool for you to target,

retarget, and conquer your local market.

It's not a far off fantasy.

It's active listening technology, and it enables you to unlock unmatched advertising efficiency

today so you can boast a bigger bottom line tomorrow.

Do we need a bigger vehicle?

I feel like my

lawyer is screwing me.

It's time for us to get serious about buying a house.

No matter what they're saying, now you can know and act. Lower down, under the How We Do

It,

they say: whether you're a scrappy startup or a Fortune 500, active listening makes the

unreachable in reach. CMG can customize your campaign to listen

for any keywords and targets relevant to your business. Here's how we do it.

We flesh out comprehensive buyer personas by uploading past client data into the platform.

We identify top performing keywords relative to the type of customer you're looking for.

We set up tracking via pixels placed on your site so we can track your ROI in real time.

AI lets us know when and what to tune into. Our technology detects relevant conversations

via smartphones, smart TVs, and other devices.

As qualified consumers are detected, a 360 analysis via AI on past behaviors of each

potential customer occurs, with the audience information gathered into an encrypted evergreen

audience list.

We use the list to target your advertising via many different platforms and tactics,

including streaming TV, OTT, streaming audio, display ads, paid social media, YouTube, Google

slash Bing search, pay-per-click.

Our technology provides a process that makes it possible to know exactly when someone is

in the market for your services in real time.

And then now having a third person can tell what you're running today.

Territories are available in 10 or 20 mile radiuses, but customizations can be made for

regional, state, and national coverage.

And then in their own FAQ, incredibly, they actually ask and answer.

Question.

Is active listening legal?

Answer. We know what you're thinking. Is this even legal? The short answer is yes. It is legal for phones and devices to listen to you.

And here they actually wrote the following. When a new app download or update prompts consumers with a multi-page terms of use agreement, somewhere in the fine print, active listening is often included.

Unbelievable. Question. How does active listening technology work? Answer. Our technology is on the cutting edge

of voice data processing. We can identify buyers based on casual conversations in real time. It may seem like black magic, but it's not. It's AI.

The growing ability to access microphone data on devices like smartphones and tablets enables our technology partner to aggregate and analyze voice data during pre-purchase conversations.


So what just happened to bring this back on our radar from nine months ago is that 404 media, that same group that had previously reported on CMG's web page, which was then quickly taken down, obtained the marketing pitch deck that is still nine months later being sent by CMG to prospective companies.

404 Media forwarded the deck to Google, who then reportedly

kicked CMG off its partner program in response. That, of course, was the right thing for Google to do. But how is it that a massive media group such as CMG is able to, with a straight face, say that consumers are permitting this, making it legal for them because, quote, somewhere in the fine print, this permission is being given?

Unbelievable.

Yeah, I feel like, you know, when this story first broke almost a year ago, we talked about it.

Oops.

I don't know what that's doing there.

Let's turn that off.

That's our discord doing their thing.

We kind of thought, well, this is probably an overstatement on the part of Cox Media Group.

You know, these guys are salesmen and saying, well, we

know what people are talking about.

Probably.

I mean, do you think that Amazon is sending the contents of Echo texts to CMG?

I don't think so.

Or maybe it's made.

Well, now we know that Amazon responds to keywords.

Yeah.

I mean, at least the enable keyword.

Maybe it's responding to a broader range of specific phrases.

I don't know.

But I don't know.

But I think it's completely possible to say that these guys are just salespeople overselling what they know, because I've yet to see evidence that they actually I mean, yeah, they probably could get stuff from smart TVs.

I doubt there's much Samsung won't sell, but I can't imagine that Amazon or Apple or Google are selling.

How is Apple?

No, there's just no.

I mean, maybe Android devices with.

Some app that has like been installed and asked for permission to access your microphone.

Yeah.

But, you know, when the microphone's accessed because the light lights up.

Well, I mean, we all know we're carrying microphones around, but they're absolutely it's kind of an unwritten law that you don't record everything and then send it to marketers.

If they get caught doing that, you know that those companies are going to be.

Well, and they're bragging about doing it.

So like how like.

I.

I don't know.

I mean, I think this is see, this is Cox Media Group over hyping their capabilities in order to make sales.

That's what I think, because I don't.

And maybe they're and maybe they're not vulnerable to being held accountable because they're not actually doing it.

So if someone says like, hey, well, you know, what is this?

It was like, oh, well, we're not really doing that.

We're just telling people we are.

Well, and maybe there are I mean, maybe there are a few devices that they are doing that with, but they're not doing it.

With the phone in your pocket, they're not doing it with your voice assistant.

I'm pretty sure. I mean, if they are, that's a huge scandal, but I think it's much more likely that Cox Media Group is lying, to be honest with you, or not lying, overstating their capabilities.

How about that?

Yes, embellishing, embellishing.

I mean, what salesperson ever embellishes now?

Who's ever heard of that?

Nobody I know.

Would you like me to take a little break here, sir?

Yes, sir, that would be good.

I'm going to I'm going to embellish my coffee.

That's why everybody hates salespeople, right?

We don't.

Lisa, who does all of our ad sales, is very quick to say, I am not a salesperson.

We are we're here to help you with your marketing.

We don't we're not salespeople.

And one of the things I think I'm very proud of is that we pick partners that are good companies with great products.

So it's easy.

You know, it's not a sales pitch.

It's just, we introduce them to you and you get to decide. Like our sponsor for this segment of Security Now, Vanta.

You probably know Vanta, whether you're starting or scaling your company security program.

You know, compliance is like job one. Demonstrating top-notch security practices and establishing trust with your customers is more important than ever.

Customers are going, well, you just heard it.

And that story, is Amazon listening to me?

This is why compliance has become a very

big deal. Of course, there is also the regulatory environment.

Vanta automates compliance for SOC two ISO 27001 and more, saving you time and money and of course, helping you build customer trust.

Now, with Vanta, you can streamline security reviews by automating questionnaires, demonstrating your security posture with a really nicely designed customer facing trust center.

And it's all powered by Vanta AI, which makes it easier for you.

Over seven thousand global companies

use Vanta. Atlassian, Flo Health, Quora.

They all use Vanta to manage risk and prove security in real time.

Maybe you should be using Vanta.

Get a thousand dollars off Vanta when you go to Vanta dot com slash security.

Now, that's V-A-N-T-A Vanta dot com slash security.

Now, one thousand dollars off if you sign up right now.

We thank Vanta so much for supporting the good work Steve does here.

And we thank you for supporting it by going to that Web site.

Vanta dot com slash security.

Now, thank you, Vanta.

All right, Steve, you're back.

You're on.

So last week's serious propeller cap, pure computer science nerd fest episode was every bit as much of a hit as I hoped it might be.

You know, it's fun thinking about new things, especially for this audience.

But I wanted to take a moment to acknowledge some of the feedback I received from a number of our more technical listeners who

correctly observed that the three layers of Bloom filtering I described last week could not always be guaranteed to be sufficient.

Those observations were correct.

My goal was to clearly establish the concepts involved, to talk about Bloom filter applications where the filter's inherent tendency to produce false positives would and would not represent any trouble.

And then, in cases where no false

positives could be tolerated, to introduce the idea of successive layers of Bloom filters, where the next successive layer of the cascade would capture and be trained on the set of false positives which had been generated by the previous layer.

So those who noted that the third layer might also produce some false positives were 100 percent correct.

And a fully realized implementation of this system actually takes the form of a variable-depth cascade, where successively smaller layers continue to be added and trained until no misbehavior is observed when the entire corpus of unexpired certificates is fed down through the entire cascade.

Eventually, there will be nothing to train the next layer on, since not a single false positive will have managed to make its way all the way down through the cascade.
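A minimal sketch of that cascade construction might look like the following. The layer size and hash count here are purely illustrative stand-ins, not the optimized parameters the "fancy math" would produce, and the revoked/valid sets are hypothetical examples.

```python
# Toy cascading Bloom filter: each layer is trained on the false
# positives that leaked through the previous layer, and layers are
# added until no false positives remain for the training corpus.
import hashlib

class BloomFilter:
    def __init__(self, size_bits, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray((size_bits + 7) // 8)

    def _positions(self, item):
        # Derive bit positions from slices of the item's SHA-256 hash.
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.num_hashes):
            yield int.from_bytes(digest[i*4:(i+1)*4], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

def build_cascade(revoked, valid, layer_bits=1 << 14):
    """Layer 1 holds the revoked set; layer 2 holds the valid certs
    layer 1 wrongly flags; layer 3 holds the revoked certs layer 2
    wrongly flags; and so on until a layer has no false positives."""
    layers, include, exclude = [], set(revoked), set(valid)
    while include:
        bf = BloomFilter(layer_bits)
        for item in include:
            bf.add(item)
        layers.append(bf)
        # Next layer is trained on the items this layer wrongly matches.
        include, exclude = {x for x in exclude if bf.might_contain(x)}, include
    return layers

def is_revoked(layers, cert):
    """A cert from the corpus is revoked iff its membership test first
    fails at an odd depth (or it passes every layer of an odd cascade)."""
    for depth, bf in enumerate(layers):
        if not bf.might_contain(cert):
            return depth % 2 == 1
    return len(layers) % 2 == 1
```

For any certificate in the training corpus, the answer is exact; false positives can only occur for items the cascade was never trained against, which is why the entire corpus of unexpired certificates is fed through during construction.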

And I guess, you know, in retrospect, I could have explained that last week, but as it was, I felt like it was already a lot for our listeners to take in.

And also for the record, I used one megabit as the number of bits in the first bloom filter level, which would be addressed by 20 bits taken from any candidate certificate's hash, purely for the sake of illustration, since that made it much easier to describe and visualize. The actual size of the first filter and of each successive filter, as well as the number of bloom layer bits that will be set by the addition of each certificate, are all well understood and are determined by a bunch of very fancy math.

But, you know, that was technically irrelevant to our understanding of the overall concept of probabilistic bloom filtering, and getting that across was the goal of last week's episode.

So anyway, definitely big props to our listeners who said, Steve, you do realize that three layers might not always do the job right.

And, you know, speaking of listeners and their feedback, I got an interesting piece of feedback.

We were talking a couple of weeks ago about the security company who discovered that they had inadvertently hired an actively hostile employee based in North Korea who'd gone to extreme measures to spoof their identity and set up a fake domestic operating location.

What happened to one of our listeners is a little different, but I think it's just worth sharing.

He wrote.

Hi, Steve.

I was interested in the story from SN 985 about North Korean hackers posing as U.S. workers and getting hired by American tech companies.

I'm currently between jobs and I got an email from someone claiming to be from Malaysia who found my profile on a job board.

This person is proposing, and he has it in air quotes, a collaboration, wherein I get hired for a remote American tech job, then he impersonates me and does all the work.

I send 85 percent of the paycheck to him, pocketing the other 15 percent for myself.

He said, I don't think anyone's ever approached me to ask for my participation in something so blatantly illegal before.

Though, if I'm being honest, I was momentarily tempted, since it would be easy money for me, and he'd still be making more this way than he could working in his own country.

Sounds like a win win apart from the whole fraud thing and serious criminal and reputational liability for me.

He said, anyway, I never responded to the messages, so I can only speculate. But I wonder if this is actually how the situation with KnowBe4 happened.

I have no reason to believe the sender of the email used his real name or that he's based in Malaysia.

It might be more plausible that this message is part of the sort of large campaign that uses an IT mule laptop farm, as described in the story.

His Gmail address is generic and formulaic enough that I suspect there are many other identities being controlled by the same party.

The message itself is so carefully wordsmithed that it doesn't strike me as a personal note from a fellow dev.

I also received a follow up message a week later, which felt more automated than not.

He said, regardless, I thought you might be interested to see it, since the public reads about the aftermath of these stories, but their onset usually happens behind closed doors. Forwarding the full message here in case you'd like to read it on air.

Thanks.

Signed, Parker.

Okay, so.

Interesting and intriguing.

Indeed.

Here's the solicitation email that Parker received.

The subject was open to a collaboration.

And it says, Hi, Parker.

I hope you're doing well and don't mind me reaching out.

I'm Lucas, a full stack developer from Malaysia.

I found your profile on, and this was on, use brain dot com slash talent, he said.

And wanted to propose a collaboration.

I don't currently have any projects that need your help, but our collaboration could be either technical or non-technical. For the non-technical aspect, I'd like your help with entrepreneurial factors for my development work. If we end up getting jobs together and working on them, it would be a technical collaboration. To keep it short, I'm looking to get well-paid jobs with companies or clients in the U.S.

While this is feasible from Malaysia, they tend to prefer hiring developers from similar time zones.

Unfortunately, I'm in GMT plus eight while the United States is in PT to ET, especially for full time jobs at companies.

They typically don't hire developers outside of the U.S.

So I believe the best way to get U.S. jobs is to impersonate someone who resides in the U.S.

It might sound risky, but it won't be risky as long as we keep this 100 percent confidential.

Besides, I don't mean that I want your identity information.

He says.

Have you heard of upwork dot com or Toptal?

They're the largest freelancing job markets in the world.

It's where most individual clients in the U.S. look for developers for their projects.

There's no easy way to get well-paid jobs, and Upwork or Toptal has a lot of competitive freelancers.

However, I'm very confident that I can get great jobs to make decent money.

Here's how it would work.

First, you open an Upwork or Toptal account and log into it on your secondary laptop. I connect to your secondary laptop via the AnyDesk app, and I search for jobs. You receive money into your bank account once I finish jobs with clients. You take your commission and send me the remaining. This would be a non-technical collaboration, and I would suggest a split of 15 to 20 percent for you and 80 to 85 for me. For full-time jobs at U.S. companies, which

obviously makes us way more money than freelancing jobs, I would apply for jobs on LinkedIn and you would crack the interviews. However, I'd say this is the advanced step of a collaboration, which should be based on a strong foundation of trust between us. Here's how that would work. I apply for company jobs on LinkedIn using your LinkedIn account and get you scheduled with interviews. You crack the interviews and get job offers. I perform the day-to-day work on those jobs while you attend the scrum meetings. He says, I can join the meetings if team members usually turn off their cameras. If you've ever done scrum, that's more work than doing the coding. Exactly. I would want more money for that. Yeah, I had the same thought. And finally, he says, you get paid into your bank account bi-weekly or monthly, and you send me my portion after deducting your commission. This would be a mixture of the technical and non-technical collaboration.

like the above because his blog is titled, Is Telegram an Encrypted App? He says, but as much

as I'd like to spend my time writing about exciting topics, sometimes the world requires

a bit of what Brad DeLong calls intellectual garbage pickup, namely correcting wrong or mostly

wrong ideas that spread unchecked across the internet. This post is inspired by the recent

and concerning news that Telegram CEO Pavel Durov has been arrested by French authorities for its

failure to sufficiently moderate content. While I don't know the details, the use of criminal

charges to coerce social media companies is a pretty worrying escalation, and I hope there's

more to the story. But this arrest is not what I want to talk about today. What I do want to talk

about is one specific detail.

Of the reporting, specifically the fact that nearly every news report about the arrest

refers to Telegram as an, quote, encrypted messaging app, unquote. This phrase, Matthew

writes, drives me nuts because in a very limited technical sense, it's not wrong. Yet in every sense that matters, it's misleading.

And this misrepresentation is bad for both journalists and particularly for Telegram's

users, many of whom could be badly hurt as a result. So does Telegram have encryption

or doesn't it? Many systems use encryption, he writes, in some way or another. However,

In the context of modern private messaging services, the word typically has a very specific

meaning. It refers to the use of default end-to-end encryption to protect users' message

content. When used in an industry standard way, this feature ensures that every message will be

encrypted using encryption keys that are only known to the communicating parties and not to the service provider. From your perspective as a user, an encrypted messenger ensures that

each time you start a conversation, your messages will only be readable by the folks you intend

to speak with. If the operator of a messaging service tries to review the content of your

messages, all they'll see is useless encrypted junk. That same guarantee holds for anyone who

might hack into the provider's servers, and also, for better or for worse, to law enforcement agencies that

serve providers with a subpoena. Telegram clearly fails to meet this stronger definition for a

simple reason. It does not end-to-end encrypt conversations by default. If you want to use

end-to-end encryption in Telegram, you must manually activate an optional end-to-end encryption

feature called secret chats for every single private conversation you want to have. The feature

is explicitly not turned on for the vast majority of conversations and is only available for one-on-one

conversations,

and never for group chats with more than two people in them. As a kind of weird bonus, he says

activating end-to-end encryption in Telegram is oddly difficult for non-expert users to actually

do. For one thing, the button that activates Telegram's encryption feature is not visible

from the main conversation pane or from the home screen.

to find it in the iOS app, he says, I had to click at least four times, once to access the user's

profile, once to make a hidden menu pop up showing me the options, and a final time to confirm that I

wanted to use encryption. And even after this, I was not able to actually have an encrypted

conversation, since secret chats, which are the only way to do this, can only be set up when the other party is online.

This is quite different from the experience of starting a new encrypted chat in an industry

standard modern messaging application, which simply requires you to open a new chat window.

Okay, now, I need to interrupt for a moment to clarify and explain something that's probably not clear.

between a messaging app providing true end-to-end encryption and merely having

encrypted communications. Matthew doesn't bother to draw attention to this distinction

because he lives in the world of encryption, where the phrase end-to-end encryption has a

very specific meaning.

But it's easy to miss this important distinction. The reason iMessage imposes a 32-member limit on

group messaging, which I mentioned earlier, and Signal and WhatsApp both impose around 1K limits,

is that these services, which Matthew describes as industry standard modern messaging applications,

are all

actually encrypting every party's message end-to-end individually to every other party.

Telegram is incapable of doing this ever. It has no ability to do this under any circumstances.

So while it's true that Telegram's individual connections are always encrypted,

it's only when two and only two parties are simultaneously online and Telegram's users

opt to enable end-to-end encryption for that single, that two-party dialogue,

that any truly unobservable conversation ever takes place over Telegram.

All larger group chats are being decrypted by Telegram's servers for re-encryption and sending to other

Telegram users. Remember that Matt mentioned that industry standard modern messaging applications

never get the keys that are being used by end users to exchange messages. Telegram has all of the keys.

So, obviously, this is a crucial distinction.
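To make that fan-out concrete, here's a deliberately toy sketch. The "cipher" is just a SHA-256 XOR keystream, a stand-in and not real cryptography, and the key names are hypothetical, but it shows the shape of the distinction: true end-to-end group messaging costs the sender one encryption per recipient under pairwise keys the server never sees, which is part of why those group-size limits exist, while a server-mediated design holds a single key itself.

```python
# Toy contrast between end-to-end fan-out and server-held encryption.
# NOT real crypto: "encryption" here is XOR against a SHA-256 keystream,
# which is symmetric, so the same function encrypts and decrypts.
import hashlib
from itertools import count

def keystream_xor(key: bytes, data: bytes) -> bytes:
    out = bytearray()
    for block in count():
        if len(out) >= len(data):
            break
        out += hashlib.sha256(key + block.to_bytes(8, "big")).digest()
    return bytes(a ^ b for a, b in zip(data, out))

def e2e_group_send(sender, recipients, pairwise_keys, message):
    """End-to-end style: one ciphertext per recipient, each under a
    pairwise key only the two endpoints know. The work (and the
    ciphertext count) grows with group size."""
    return {r: keystream_xor(pairwise_keys[(sender, r)], message)
            for r in recipients}

def cloud_chat_send(server_key, message):
    """Server-mediated style: a single ciphertext under a key the
    server holds, so the server can read and re-encrypt everything."""
    return keystream_xor(server_key, message)
```

In the end-to-end case each recipient decrypts their own copy with their own pairwise key; in the server-mediated case anyone holding the server key, including the operator, can read the message.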

Returning to Matthew's explanation, he says,

while it may seem like I'm being picky, the practical impact is that the vast majority of one-on-one Telegram conversations and literally every single group chat are visible on Telegram's servers, which can see and record the content of all messages sent between users.

That may or may not be a problem for every Telegram user, but it's certainly not something we'd advertise as particularly well encrypted.

He said, parens, if you're interested in the details, as well as a little bit of further criticism of Telegram's actual encryption protocols, I'll get into what we know about that further below.

He says, so does default encryption really matter?

Maybe yes, maybe no.

There are two different ways to think about this.

One is that Telegram's lack of default encryption is just fine for many people.

The reality is that many users don't choose Telegram for encrypted private messaging at all.

For plenty of people, Telegram is used more like a social media network than a private messenger.

And by the way, we talked about this ages ago.

That was exactly the conclusion we came to, was that Telegram is encrypted enough or is not encrypted at all.

But that's good enough.

I think that was actually the phrase you said, good enough messaging.

Right.

Yeah.

Right.

So people, as long as you know that and they don't advertise otherwise, that's fine.

But unfortunately, they imply that it is encrypted.

Yes.

And even to the point where Pavel, I don't think I have it in the show notes, but Pavel has actively attacked Signal and WhatsApp deriding their encryption.

He says, oh, the government has backdoors into those guys.

Well, the government doesn't need a backdoor at Signal.

Jeez.

Jeez Louise.

Yeah.

So he said Telegram also support.

Oh, I'm sorry.

He was talking about how they use it as a social media network more than a private messenger.

And he said, getting more specific, Telegram has two popular features that make it ideal for this use case.

One of those is the ability to create and subscribe to channels, each of which works like a broadcast network, where one person or a small number of people can push content out to millions of readers.

When you're broadcasting messages to thousands of strangers in public, maintaining the secrecy of your chat content isn't important.

He says Telegram also supports large group chats that could include thousands of users.

These groups can be made open for the general public to join, or they can be set up as invite-only.

He said, while I've never personally wanted to share a group chat with thousands of people, I'm told that many people enjoy this feature.

In the large and public instantiation, it also doesn't really matter that Telegram group chats are unencrypted.

After all, who cares about confidentiality if you're talking in the public square?

He says, but Telegram is not limited to just those features.

And many users who join for them will also do other things.

Imagine you're in a public square having a group conversation.

In that setting, there may be no expectation of strong privacy.

And so end-to-end encryption doesn't really matter to you.

But let's say that you and five friends step out of the square to have a side conversation.

Does that conversation deserve strong privacy?

It doesn't really matter what you want because Telegram won't provide it.

At least not with encryption that protects you from sharing your content with Telegram's servers.

Similarly, imagine you use Telegram for its social media-like features, meaning that you mainly consume content rather than producing it.

But one day your friend, who also uses Telegram for similar reasons, notices you're on the platform and decides,

she wants to send you a private message.

Are you concerned about privacy now?

And are you each going to manually turn on the secret chat feature,

even though it requires four explicit clicks through hidden menus,

and even though it will prevent you from communicating immediately if one of you is offline?

My strong suspicion, he writes,

is that many people who join Telegram for its social media features also end up using it to communicate.

And I think Telegram knows this, and tends to advertise itself as a secure messenger,

and talk about the platform's encryption features precisely because they know it makes people feel more comfortable.

But in practice, I also suspect that very few of those users are actually using Telegram's encryption.

Many of those users may not even realize they have to turn encryption on manually

and think they're already using it.

And this brings me to my next point.

Telegram knows its encryption is difficult to turn on,

and they continue to promote their product as a secure messenger.

Telegram's encryption has been subject to heavy criticism since at least 2016,

and possibly earlier, for many of the reasons I outlined in this post.

In fact, many of these criticisms were made by experts,

including myself,

in years-old conversations with Pavel Durov on Twitter.

And Leo, I'm going to inject something next,

but let's take our final break,

and then we're going to get into what Matthew thinks about the actual technology that Telegram has deployed.

Stay tuned for Steve's injection.

But first, a word from our sponsor, Delete Me.

Oh, man.

We are so glad we used Delete Me after that big NPD breach you talked about last week.

Holy camoly.

If you've ever searched for your name online and didn't like what you saw,

well, join the club.

I don't recommend it.

It's eye-opening, and it's depressing.

Maintaining privacy is not just a personal concern.

It's a concern for your business, especially for your managers, your C-suite.

We had to use Delete Me for Lisa.

We wanted to.

Because Lisa, her personal information was being used to spear-phish our employees,

their personal information, too.

But with Delete Me, it's all gone.

And Delete Me even has a family plan,

so you can ensure everyone in the family feels safe online, too.

Delete Me helps reduce the risk from identity threat, from cybersecurity threats,

from harassment, spear phishing.

It really made a big difference.

And I think every business needs Delete Me, at least for their management,

because that information just empowers a bad guy to go after you.

Delete Me's experts, when you sign up, will find and remove your information

from hundreds of data brokers like NPD.

There are hundreds out there, and there's more every day, sad to say,

because it's a very lucrative business, and it's absolutely legal.

With Delete Me, you can assign a unique data sheet to each family member

that's tailored to them, because you may say,

well, the Instagram is fine, but we want to know what's going on in Facebook,

things like that.

With easy-to-use controls, the account owner can manage privacy settings

for the whole family.

Now, once you sign up for Delete Me and that first removal happens,

and this is really important, because, you know,

the law requires these data brokers to have a form that says remove my data.

And if you knew all, you know, 400 of them,

and you could go to each one individually, you could do that.

The problem is they start repopulating it immediately. Delete Me, though, continues.

First of all, not only do they know who to go to

and remove all that stuff to begin with,

they continue to scan and remove your information regularly.

And that's really, really important.

That's why you pay for Delete Me, frankly.

I mean, we're talking things.

Don't do this, but if you searched online for your name,

you'd find addresses, photos, emails, relatives, phone numbers,

social media, property value.

And if you were in the NPD breach,

and I think everybody in the United States was, social security numbers, too.

This is terrible.

Protect yourself.

Reclaim your privacy.

Visit joindeleteme.com slash twit.

If you use the offer code TWIT, you'll get 20% off.

That's joindeleteme.com slash twit.

The offer code twit gets you 20% off.

It's a shame we have to do this,

but until Congress bans these data brokers,

this is just something we're going to have to live with.

Fortunately, we can, at least we can do this.

joindeleteme.com slash twit.

You don't really have to live with it.

Okay, Steve, time for my injection.

Okay.

It was an interjection, but yes.

Not an injection.

I'm going to interject here.

Okay.

To note that back on the morning of March 29th, 2015, after Matthew first sat down to take a serious long look at Telegram's encryption protocol and its system, his tweet linked to Telegram's page.

I've got the link in the show notes for anyone who's interested.

And the Telegram page is titled creating an authorization key.

So he tweets the link. And then he says, like, seriously, what the F is even going on here.

Okay.

So this is a top cryptographer who understands this stuff,

who looks at Telegram's technical document on creating an authorization key

and is scratching his head.

Okay.

So he writes, although the interaction with Durov, and now he's speaking of the interactions that the security community, including himself, had sometime later, actually in 2016, the next year, he said, although the interaction with Durov could sometimes be harsh,

he said,

I still mostly assumed good faith from Telegram back in those days.

I believed that Telegram was busy growing their network and that in time they would improve the quality and usability of the platform's end-to-end encryption.

And remember when he says that he means exactly that end to end.

And, he said, which is to say that the platform never has the keys; only the endpoints know the keys being used to encrypt and decrypt their conversation. That's the key. Telegram only offers that if you jump through hoops, and it's never on by default; there is no on, because of the hoops you have to jump through.

So again, he believed Telegram was busy growing their network and that in time they would improve the quality and usability of the platform's end-to-end encryption.

For example,

by activating it as a default or providing support for group chats and making it possible to start encrypted chats with offline users.

You know, those are all things we take for granted, right? In all the other state-of-the-art platforms, they all do all of that.

He said,

I assumed that while Telegram might be a follower rather than a leader,

it would eventually reach feature parity with the encryption protocols offered by Signal and WhatsApp.

Of course,

a second possibility was that Telegram would abandon encryption entirely and just focus on being a social media platform.

What's actually happened, he wrote, is a lot more confusing to me. Of course, he's being generous. He said,

instead of improving the usability of Telegram's end to end encryption,

the owners of Telegram have more or less kept their encryption user experience unchanged since 2016.

While there have been a few upgrades to the underlying encryption algorithms used by the platform,

the user facing experience of secret chats in 2024 is almost identical to the one you'd have seen eight years ago.

This despite the fact that the number of Telegram users has grown by seven to nine times during the same time period.

At the same time,

Telegram CEO and sole owner Pavel Durov has continued to aggressively market Telegram as a secure messenger.

Most recently,

he issued a scathing, oh, I do have it in the show notes, a scathing criticism of Signal and WhatsApp on his personal Telegram channel, implying that those systems were backdoored by the U.S. government and only Telegram's independent encryption protocols were really trustworthy.

Well,

you might argue the government couldn't understand them.

So maybe anyway,

he says,

while this might be a reasonable nerd argument if it was taking place between two platforms that both supported default end-to-end encryption,

Telegram really has no legs to stand on in this particular discussion.

Indeed,

it no longer feels amusing to see the Telegram organization urging people away from default encrypted messengers while refusing to implement essential features that would widely encrypt their own users messages.

In fact,

it's starting to feel a bit malicious.

So what about the boring encryption protocols? Since this is a cryptography blog, I'd be remiss if I didn't spend at least a little bit of time on them.

I'd also be missing a good opportunity to let my mouth gape open in amazement,

which is pretty much what happens every time I look at the internals of Telegram's encryption.

I'm going to handle this in one paragraph to reduce the pain.

And you can feel free to skip past it if you're not interested.

Okay,

now I am going to interrupt Matthew again to note that he has laced his description,

which I'm about to share with asterisks.

And later he explains that, quote, every place I put an asterisk in the paragraph is a point where expert cryptographers would, in the context of something like a professional security audit, raise their hands and ask a lot of questions.

Okay,

so I'll just I'll say the asterisks as I'm sharing this.

And now you know that every time there's an asterisk,

this is Matthew saying,

uh,

what?

Okay,

so he writes,

according to what I think is the latest encryption spec,

Telegram's secret chats feature is based on a custom protocol called MTProto 2.0.

This system uses 2048-bit, asterisk, finite-field Diffie-Hellman key agreement, with group parameters, I think he says, chosen by the server, asterisk. Since the Diffie-Hellman protocol is only executed interactively, this is why secret chats cannot be set up when one user is offline, asterisk. MITM protection is handled by the end users, who must compare key fingerprints.

There are some weird random nonces provided by the server,

which I don't fully understand the purpose of asterisk and that in the past used to actively make the key exchange totally insecure against a malicious server.

But this has long since been fixed.

Okay.
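Since both halves of that exchange have to happen live, here's a minimal sketch of interactive finite-field Diffie-Hellman. The toy-sized Mersenne prime is an illustrative assumption standing in for MTProto's server-chosen 2048-bit group, and the fingerprint step mirrors the manual comparison Telegram's users are expected to do.

```python
# Sketch of interactive finite-field Diffie-Hellman: each side must be
# present to contribute a fresh secret, which is why a secret chat
# can't be set up while one party is offline.
import hashlib
import secrets

P = 2**61 - 1  # toy prime for illustration; MTProto uses a 2048-bit group
G = 2

def dh_keypair():
    # Private exponent in [1, P-2], public value g^priv mod p.
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    # Both sides compute g^(ab) mod p, then hash it into a session key.
    return hashlib.sha256(
        pow(their_pub, my_priv, P).to_bytes(8, "big")).digest()

# Both parties are online and each contributes half of the exchange.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
key_a = shared_key(a_priv, b_pub)
key_b = shared_key(b_priv, a_pub)

# MITM protection is manual: the users compare key fingerprints.
fingerprint = hashlib.sha256(key_a).hexdigest()[:16]
```

Both sides arrive at the same key without it ever crossing the wire, but nothing in the math detects a man in the middle; that's what the out-of-band fingerprint comparison is for.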

Asterisk,

the resulting keys are then used to power, and here comes the most amazing nonstandard authenticated encryption mode ever invented, something called Infinite Garble Extension, IGE, based on AES and with SHA-2 handling authentication, asterisk.
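For the curious, the IGE chaining rule itself is easy to sketch. This is a deliberately toy version: the block "cipher" here is just a fixed XOR pad, an assumption for illustration, whereas MTProto actually uses AES for the block operation. The point is the chaining, where each ciphertext block depends on both the previous ciphertext block and the previous plaintext block.

```python
# Toy sketch of IGE ("Infinite Garble Extension") chaining:
#   c[i] = E(p[i] XOR c[i-1]) XOR p[i-1]
# with two IVs seeding c[0-1] and p[0-1]. The block cipher E below is a
# fixed XOR pad (its own inverse), NOT AES, purely for illustration.
import hashlib

BLOCK = 16

def toy_block_encrypt(key, block):
    pad = hashlib.sha256(key).digest()[:BLOCK]
    return bytes(a ^ b for a, b in zip(block, pad))

toy_block_decrypt = toy_block_encrypt  # XOR pad is its own inverse

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def ige_encrypt(key, iv_c, iv_p, plaintext):
    assert len(plaintext) % BLOCK == 0
    prev_c, prev_p, out = iv_c, iv_p, b""
    for i in range(0, len(plaintext), BLOCK):
        p = plaintext[i:i + BLOCK]
        c = xor(toy_block_encrypt(key, xor(p, prev_c)), prev_p)
        out += c
        prev_c, prev_p = c, p
    return out

def ige_decrypt(key, iv_c, iv_p, ciphertext):
    prev_c, prev_p, out = iv_c, iv_p, b""
    for i in range(0, len(ciphertext), BLOCK):
        c = ciphertext[i:i + BLOCK]
        p = xor(toy_block_decrypt(key, xor(c, prev_p)), prev_c)
        out += p
        prev_c, prev_p = c, p
    return out
```

Because every block is entangled with both prior streams, corrupting one ciphertext block garbles everything after it on decryption, hence the "infinite garble" name.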

You said Infinite Garble Extension. Iggy. Honestly, the more I've been thinking about this, the more I think this is actually malicious, that this is not ignorance. They know exactly what they're doing. Yeah, I think so, yeah. And that's the point that Matthew's come to, is that they know what's going on. Pavel knows this is not actually encrypted. And I'm sure he's telling governments, oh, we can't get in, we can't moderate, this is all super secure. Uh, no. Anyway, Matthew says, I'm not going to go further than this. Suffice it to say, Telegram's encryption is unusual.

And I loved it. He said, the most amazing non-standard authenticated encryption mode ever invented, something called Infinite Garble Extension. Right. Anyway, he said, if you ask me to guess whether the protocol and implementation of Telegram secret chats is secure, I would say quite possibly. To be honest, though, it doesn't matter how secure something is if people are not actually using it.

So he says, is there anything else to know? Yes. Unfortunately, even though end-to-end encryption is one of the best tools we've developed to prevent data compromise, it is hardly the end of the story. One of the biggest privacy problems in messaging is the availability of loads of metadata, essentially data about who uses the service, who they talk to, and when they do that talking. That data is not typically protected by end-to-end encryption. Even in applications that are broadcast only, such as Telegram's channels, there's plenty of useful metadata available about who is listening to a broadcast. That information alone is valuable to people, as evidenced by the enormous amounts of money that traditional broadcasters spend to collect it. Right now, all of that information likely exists on Telegram's servers, where it's available to anyone who wants to collect it. I'm not specifically calling out Telegram for this, since the same problem exists with virtually every other social media network and private messenger. But it should be mentioned, just to avoid leaving you with the conclusion that encryption is all we need. Okay. So there are many useful, wonderful bits among what

matthew wrote one is that while telegrams crypto is bizarre on its face it's not obviously insecure

but neither has it ever been shown to be secure mostly it's just bizarre or as matthew put it

what the f the most important thing for telegrams users to appreciate is that what matthew referred

to as today's industry standard encrypted messaging apps provide always-on

end-to-end encryption by default while extending that true end-to-end encryption no matter how

many individuals are participating in chat groups and you know leo i didn't think of this when i was

putting this down on paper yesterday but telegram is actually riding on the coattails

of the other messaging apps oh we do it too we're end to end see yeah yeah because

apple and signal and whatsapp have established the idea that everything is secure

because they actually are right telegram's just saying yeah we are too us too yeah you know we

do that that's what messaging apps do except it's not what they do

yep so uh also remember the last time we talked about iMessage and saw that not only had apple implemented true multi-party end-to-end encrypted messaging but that iMessage is also offering true forward secrecy by periodically and continuously rolling its conversation keys

iMessage and signal offer technology that telegram has never had and as matthew noted shows no sign

of obtaining or even wanting it's pretty clear they don't want it
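steve's point about rolling conversation keys is easy to make concrete. here's a minimal python sketch of a one-way hash ratchet in the spirit of what signal and iMessage do: each message gets a fresh key derived from a rolling chain key, and once the chain advances, the earlier keys can't be re-derived, which is what gives you forward secrecy. this is an illustration of the general technique, not either app's actual key derivation.

```python
import hashlib

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Advance the chain one step, returning (next_chain_key, message_key).

    Distinct domain-separation prefixes keep the two derived values independent.
    """
    next_chain = hashlib.sha256(b"chain" + chain_key).digest()
    message_key = hashlib.sha256(b"msg" + chain_key).digest()
    return next_chain, message_key

# Start from a secret agreed during an initial handshake (placeholder value here).
chain = hashlib.sha256(b"initial shared secret").digest()

message_keys = []
for _ in range(3):
    chain, mk = ratchet(chain)   # the old chain value is overwritten...
    message_keys.append(mk)      # ...so earlier message keys can't be recovered from it

# Every message was protected with a fresh, unique key.
assert len(set(message_keys)) == 3
```

the one-way property is the whole trick: because sha-256 can't be run backwards, someone who steals the current chain key still can't compute any of the keys that protected earlier messages.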

yeah right and look they've gone up by a factor of seven to nine i mean it's super

popular why complicate that with additional technology it's like they don't need

more encryption they're able just to claim it and of course telegram's popularity

may not really be about true security right it's more about subscribing to its channels

with a weaker assumption that well

things are secure here only because telegram also has the unearned reputation of being a secure

messaging system so anyway you know they're unable to offer what the other guys offer even

with much smaller groups and it's a benefit that telegram is able to have these massive

hundreds-of-thousands-of-subscriber broadcasts

they cannot make those

end-to-end encrypted so they don't yet they're getting the reputational benefit as if they had

anyway matthew began as we know by posing the question is telegram an encrypted app the most

generous answer would be that while it can be forced to provide state-of-the-art end-to-end

encryption between two online parties it certainly is not as encrypted as the general public its

users and the press have all come to assume more than anything else its ability to broadcast to very large groups

has turned it into a social media platform with an air of undeserved security and privacy
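matthew's metadata point is worth making concrete. even if every message body were perfectly end-to-end encrypted, the server still sees who sent something to whom, when, and how large it was. here's a purely hypothetical python sketch of such a log, invented for illustration and in no way telegram's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MessageEvent:
    sender: str      # account identifiers only -- never message contents
    recipient: str
    timestamp: int   # unix seconds
    size_bytes: int  # even the ciphertext's length leaks information

# What a server can record while never seeing a single plaintext byte.
log = [
    MessageEvent("alice", "bob",  1725372000, 512),
    MessageEvent("alice", "bob",  1725372300, 256),
    MessageEvent("carol", "dave", 1725372600, 1024),
]

# Trivial traffic analysis: who talks to whom, and how often.
pairs: dict[tuple[str, str], int] = {}
for event in log:
    key = (event.sender, event.recipient)
    pairs[key] = pairs.get(key, 0) + 1

assert pairs[("alice", "bob")] == 2
```

that tiny dictionary is exactly the kind of relationship graph end-to-end encryption does nothing to hide.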

so there you have it thanks to matthew green for laying it out yeah i think that's uh i read the

piece too i'm glad you uh you brought it back because it was very interesting uh and i thought

it was a pretty big takedown of it unfortunately the people who most need to read it will never know

this is just for our listeners yeah and even our listeners already know this because we've covered

this subject yeah before um i like telegram actually maybe i shouldn't mention

this but right now we're streaming on seven streams as you know youtube and twitch linkedin

facebook twitter discord and kick

and i think we're going to replace kick with telegram because you know in fact when

telegram really took off maybe eight years ago i loved it i thought i want everybody to use

this we should but we talked about this and it was as you said good enough it's not

encrypted but most of the time you don't expect that the standards have changed now thanks to

apple and google using rcs in google's case and leo the podcast

your network podcast don't need encryption right we don't want them to be encrypted we want everybody

to see them yeah so i think telegram i don't know we'll see you

know what's cool though we have 764 people watching on those seven platforms right now

and i think that's a great way to introduce ourselves to a new audience if you like what

you hear you know thank steve gibson he's been doing this for almost a thousand

episodes now 18 19 years closing in on it yeah and uh and and it's the best thing i tell you what

uh no one's been doing it longer or more effectively spreading the word about security than this guy

right here we're very very grateful to him and you're right i'd be really sad if we were

counting down if this was 10 9 8 it'd be just terrible uh if you like the show support it

join the club club twit twit.tv slash club twit

interact with steve buy a copy of spinrite that's his real bread and butter the world's

best mass storage performance enhancing maintenance and recovery utility 6.1 is the current

version it's available from him directly at grc.com that's his website while you're at grc

of course you can find lots of great stuff steve gives away most of what he writes like validrive

which makes sure that the usb storage you thought you were buying from amazon

actually has the capacity it claims uh shields up which i've been using for two decades to test my

routers before i go public in control if you don't want microsoft to upgrade windows out from under

you yeah and uh the new one which i can never remember the name of is it boot secure

anyway go to grc.com check it out while you're there you can get a copy

of the show he has the usual stuff

64 kilobit audio also two unique formats the 16 kilobit audio for the bandwidth impaired yes

that's right 16 if you want to know what the old days sounded like listen at 16 kilobits

it sounds like thomas edison's mary had a little lamb uh he also has the text version of the show

the transcriptions written by elaine farris who uses those 16 kilobit versions so that's why we

make them he has the show notes too and i think a lot of people like to get the show notes as

a great kind of précis of what we've talked about the pictures are in there uh all the links and so

forth grc.com we have uh the show at our site 64 kilobit audio but we also have our unique format

video yeah we do a video version of this you don't need it but if you wanted it you can get it at

twit.tv slash sn uh you can also get a copy from youtube actually if you go to twit.tv slash sn

there's a link there so you can go right there that's a good way to share you know i know a lot of times people

hear something go oh i gotta i gotta get my boss this or oh i really should pass this along to my

uncle because he's he thinks telegrams all that so if you do hear something you want to clip

youtube's really good for just taking that little piece and sending it to him and who knows maybe

we'll create a new listener by doing that you can also subscribe in your favorite podcast player

we have links on the web page for those too that way you'll get it automatically without even

having to click a link or anything uh you can also watch us live as i mentioned on all those

platforms as 760 people are doing right now uh the easiest way to do that is go to the web page

twit.tv slash sn and you can see the live links um we do it right after mac break weekly so the

time is a little you know fungible but uh usually around 2 p.m pacific 5 p.m eastern that's when we

started today 2100 utc of a tuesday

thank you steve have a great week thank you my friend i will not be here next week going on

vacation for a couple of weeks right micah will take over he'll do a great job uh and i will see you

september 24th yep i already verified with micah that he receives the emails that i send so he'll have

the show notes for next week and the week after so we're missing you for two episodes

right two episodes i will be back uh on the 24th okay thanks steve have a great week see you next

month

ai might be the most important new computer technology ever it's storming every industry

and literally billions of dollars are being invested so buckle up the problem is ai needs

a lot of speed and processing power so how do you compete without costs spiraling out of control

it's time to upgrade to the next generation of cloud

oracle cloud infrastructure or oci oci is a single platform for your infrastructure database

application development and ai needs oci has four to eight times the bandwidth of other clouds

offers one consistent price instead of variable regional pricing and of course nobody does data

better than oracle so now you can train your ai models at twice the speed and less than half the

cost of other clouds if you want to do more and spend less like uber 8x8 and databricks mosaic

take a free test drive of oci at oracle.com slash security now that's oracle.com slash

security now oracle.com slash security now
