End-of-Year Cybersecurity Wrap: A Digital Review of 2023

In a rapidly evolving digital landscape, cybersecurity has become a paramount concern for governments, businesses, and individuals alike. As threats in the cyber realm continue to grow in sophistication and frequency, lawmakers around the world are taking action to fortify their digital defenses and protect their residents. 

This year was no different. We witnessed significant developments in cybersecurity legislation worldwide — from attempted TikTok bans and encryption wars to enhanced child protection laws and hidden censorship. We’ll delve into the most noteworthy legislation and new initiatives that came out in 2023, shedding light on the measures designed to address emerging challenges and secure the digital future.

TikTok Bans and Other Moves to Increase Censorship

End of TikTok in the US? It Just Might Be

You might recall the ever-so-famous TikTok hearing in Congress earlier this year, which generated numerous memes and jokes still in circulation today. The hearing came as a result of the proposed RESTRICT Act of 2023, which put the short-form video platform at risk of being banned across the country.

Though the bill doesn’t actually mention the platform by name, the RESTRICT Act aims to protect national security from platforms with links to foreign governments. It would allow the government to ban or restrict certain apps, services, software, or equipment created and sold by countries that could be spying on Americans, such as China, where TikTok’s parent company is incorporated.

The new act states that for an app to fall under its scope, it needs access to the sensitive information of at least 1 million US users, so not every platform is at risk. Beyond that, however, the definition of what constitutes a “threatening” app is extremely vague, leaving a lot of room for interpretation. It could let the government step in when intervention isn’t warranted, impeding your freedom of expression and increasing censorship.

This vagueness could mean future restrictions on VPNs, especially if they’re not owned by American companies. The RESTRICT Act mentions banning any service used to bypass the legislation and access apps already blocked in the country, which could include VPNs — bad news for privacy-conscious Americans. 

Until the Act is approved, VPNs remain the easiest way to access restricted platforms. You can even use a VPN free trial to see for yourself how easy it is to use on your devices.

The Controversy of the SREN Bill in France

As part of its implementation of the Digital Services Act (DSA), the French parliament created legislation to regulate and secure digital services, which also threatens to increase internet censorship in the country. The SREN Bill (specifically Article 6, Paragraphs II and III) proposes forcing browsers and DNS providers to block websites the government deems illegal. If approved, the new law would undermine existing moderation standards and potentially create an almost authoritarian approach to limiting online freedom.

You may think it’s not such a big deal, since France isn’t known for censorship issues. However, if providers have to develop new technology to comply with the SREN requirements, it could encourage other, stricter governments to follow suit or adopt the same tech, with far less favorable restrictions.

Another concern is how the new legislation could affect popular services in France. Some provisions in the SREN bill clash with the principles of sites like Wikipedia that rely on decentralized, collaborative editing, possibly leading to the site being blocked in the country. VPNs are also at risk, as the bill would make connecting to an independent server an illegal activity.

Due to these global concerns and worries over increased government control, many online services reached out to the French parliament. The Mozilla Foundation even started a petition to prevent France from putting the SREN bill into action.

EU Moves to Control AI

Italy’s ban on ChatGPT at the end of March 2023 prompted EU members to take a closer look at the AI platform. We didn’t have to wait long to see results: in April, we got announcements of the EU’s ChatGPT task force and the AI Act proposal. Neither is fully set in stone as of yet, but both offer promising guidelines for the future of AI use.

The dedicated ChatGPT task force is set to foster cooperation and information exchange between EU countries on how to monitor the platform and enforce possible actions against it. It’s important to note the member countries don’t seek to punish or control ChatGPT, but are aiming to create policies that increase transparency.

Spain quickly followed the EU’s decision and created its own AI watchdog through the Statute of the Spanish Agency for the Supervision of Artificial Intelligence (“AESIA” for short). It’s set to officially start working in December 2023 in preparation for the EU’s AI Act. We’ll likely see more AI legislation and big cybersecurity decisions come from Spain in 2024.

The European Parliament also revealed its proposal for the Artificial Intelligence Act this year, making it the first comprehensive AI rulebook in the world. Among the most important provisions are bans on using AI to capture people’s images through security cameras, to sort people based on characteristics like race or religion, and to coerce people into doing things they don’t want to do.

The initial proposal needed further refinement, as some parts were rather controversial, like allowing AI to recognize people’s faces in real time. Some countries also argued for the freedom to let AI companies set some of their own rules, which the bill didn’t include at first. The Act was finally approved in mid-December, so we’ll have to wait and see what comes out of it next year.

Protecting Children Online: The Who, What, and How

Significant Changes in the US

The White House has made significant changes to cybersecurity laws this year, including a couple of new orders that address the digital well-being of young internet users. Some of it was good, and some… not so much. 

Let’s start with the Task Force on Kids Online Health & Safety. The new task force is responsible for investigating the benefits and risks of internet platforms like social media and their impact on children and teenagers. It’ll provide new resources, best-practice advice, and policy suggestions on internet use in elementary and secondary schools sometime in spring 2024.

Unlike other legislation on the list, this one was met with near-unanimous praise. Experts applaud the changes, hoping they’ll bring more positive outcomes for children online. Still, a few voices question whether the new policies will result only in guidelines rather than concrete steps to protect young people. We’ll have to wait and see what the task force comes up with.

Then, there were new revisions to the Kids Online Safety Act, which put it back on the table for approval. The revised bill introduces extra guidelines aimed at protecting children who use social media platforms. They include safeguards for minors like age verification, plus tools for parents and guardians to restrict their children’s accounts, control privacy settings, and change available features.

Many criticize the harsh requirements the new act would introduce, as they could hinder teens’ and adults’ right to privacy and autonomy. Anyone signing up for social media would have to provide a form of ID, and those under 18 would also have to give their parents full access to their account. This could put particularly vulnerable adolescents, like LGBTQ+ minors, at risk of disapproval or even abuse from unsupportive guardians.

French Updates to Child Protection Laws

The French government introduced its own version of similar legislation with Law No. 2023-566. Under the new law, those younger than 15 can’t sign up for social platforms without a parent’s green light. These changes are supposed to help combat the ever-growing problem of cyberbullying and minimize the negative effects social media can have on minors. It’s also part of a bigger movement aimed at reducing children’s screen time.

According to the new legislation, parents will also be able to request an account suspension for their young children, and sites will be required to offer a range of new tools to manage young accounts, their screen time, and more. Social platforms that don’t obey the law will risk a fine of up to 1% of global revenue. 

As for when all this is happening, there’s no set date just yet. The European Commission has yet to run its compliance checks against EU law before France can begin rolling out the new rules. Once that happens, social media sites will still have a year to apply the age requirement policy to new users and 2 years to ensure all existing members comply with it.

The Act of All Acts in the UK

Back in 2019, many Prime Ministers ago, the UK parliament came up with the idea of regulating social media and the hate speech and bullying rampant online. At the time, however, no one had the means to actually implement it. Fast forward to 2023 and the introduction (and approval) of the Online Safety Act (OSA) 2023, potentially the most controversial bill passed in Europe this year.

Promising to make the UK “the safest place to be online,” OSA rolls out new requirements for online services to protect children from harmful and illegal material. Introduced rules include age verification on porn sites, regular content moderation, an end to scam ads, and a block on all terrorism-related posts. OSA also created new digital offenses, such as cyber-flashing (sending unsolicited nudes) and spreading AI-generated deepfake pornography.

According to the act’s definition of affected services, we can expect to see around 100,000 businesses scrambling to abide by the new rules. If they don’t, OSA gives regulators several enforcement mechanisms: fines of up to £18 million or 10% of annual global revenue, service restrictions, and inspections. In extreme cases, executives at these companies could face up to 2 years in prison.

At first glance, the Online Safety Act doesn’t look remotely bad; in fact, internet platforms probably needed stricter regulations. However, the new legislation could be used to force communication services, like WhatsApp and Signal, to read the encrypted messages you send and scan them for signs of child abuse. This significantly threatens your privacy and impedes your right to freedom of expression.

As a result, WhatsApp and Signal openly opposed the act and announced they’d stop providing their services in the UK. Other apps, like Proton, said they’d be happy to fight the government in court if asked to hand over decrypted user files. Luckily, such extreme actions may not be necessary, as Ofcom promised not to request data until scanning files without breaking encryption becomes technically feasible.

Canadian Content Waiting to Go Viral

The Canadian government recently approved two bills aimed at putting local artists and content on the map. It all sounds rather reasonable, until you look deeper into the consequences the bills have already brought with them.

First came Bill C-18, or the Online News Act, which regulates how social media sites and search engines distribute news links. The act requires popular websites, like Google and Meta, to enter agreements with Canadian news publishers and compensate them for sharing their articles. The government hopes to mitigate the news industry’s ad revenue losses and declining subscriptions of the past few years.

Though the intent behind it seems harmless, it put Canada at war with Meta and Google, the only companies covered by the law. The two digital giants refused to abide by the legislation, as it placed no cap on how high their new fees could run. Since then, Google has reached an agreement with the government, agreeing to pay C$100 million to Canadian news publishers every year. This means you’ll still get your news through the search engine.

Meta, however, doesn’t want to budge — and neither does Justin Trudeau. As a result, the company outright blocked Canadian news on Facebook, Instagram, and Threads. It doesn’t look like either side will compromise anytime soon, but it’ll be interesting to watch this new dynamic unfold.

Alongside C-18, the Canadian government also passed Bill C-11, known as the Online Streaming Act. The bill likewise prioritizes Canadian creators, but in the digital streaming space. This means radio services, TV channels, and streaming platforms like YouTube, Spotify, and Netflix will have to promote local content above foreign content. Failure to do so could result in hefty fines.

The new law risks imposing censorship by letting the government influence what users view online. Streaming platforms won’t be able to rely on your personal preferences, interests, and viewing history as they currently do, but will instead present you with strictly Canadian or CRTC-approved content. There are no clear rules on what constitutes approved content either: in the past, shows about Donald Trump were approved while Turning Red and The Handmaid’s Tale weren’t.

Local artists and content creators also worry the new bill will make it harder for them to gain traction online. If big streaming services are forced to show specific content, smaller artists face the risk of being buried under forced recommendations. At the same time, bigger creators might be locked away from larger audiences as their content will become exclusive to Canada only.

So… Did the EU Increase Media Freedom or Not?

In light of huge concerns over how much influence the governments of Poland, Hungary, and Slovenia had over their media, the EU proposed the European Media Freedom Act (EMFA). Its main aim is to foster media plurality, increase transparency, and protect journalists, but the legislation comes with major flaws and some highly intrusive provisions.

The main concern is that EMFA may create a special class of media providers on large platforms, putting them in a privileged position where their content cannot be removed. They would be able to post whatever content they wanted with minimal supervision. To claim this status, a provider merely has to self-declare it and adjust its editorial standards. This puts platforms in a tricky spot, as they have to decide who gets special treatment and who doesn’t.

Part of the EMFA enforces a 24-hour content moderation exemption, essentially forcing platforms to host these providers’ content even if it goes against their guidelines. This “must carry” rule may undermine equality of speech, promote disinformation, and put marginalized groups at risk. It also raises questions about government interference in editorial decisions.

Data Protection and Cyberattack Prevention Hopes

Biden’s National Cybersecurity Strategy 2023

The Biden-Harris administration released the National Cybersecurity Strategy earlier this year, after having had to deal with the SolarWinds data breach soon after taking office. The strategy is meant to strengthen US cybersecurity by building resilience to cyberattacks and forging partnerships with other countries.

It sets strict cybersecurity rules, aiming to deter hackers by increasing the risks and costs associated with cyberattacks. The strategy also prioritizes enhancing collaboration between government and private sector entities, recognizing the importance of shared knowledge and resources in strengthening cybersecurity defenses. Additionally, it offers incentives for companies to improve their cybersecurity measures.

The National Cybersecurity Strategy commits to investing in technological advancements and preparation for future challenges, promising continuous adaptation to evolving threats. A key component involves forming international partnerships and creating a united front against global cyber threats. This approach aims to make the internet safer and ensures a coordinated defense effort between government and private sectors against sophisticated cyber attacks.

While the strategy’s goal is to tackle malicious actors, some worry about how it defines ‘public safety’ in the context of cybersecurity. This becomes relevant when dealing with misinformation campaigns on social media, which can manipulate public opinion. Finding the right balance between addressing disinformation and safeguarding freedom of speech presents a difficult challenge.

The End of Selling and Buying Data May Be Near

Massachusetts might become the first state to ban the sale of phone location data through the proposed Location Shield Act. This came after revelations that government agencies can bypass the Fourth Amendment and buy your phone data from third-party vendors with little legal hindrance. Such practices heavily infringe on your digital freedom, highlighting the need for comprehensive privacy regulations and the growing awareness of digital surveillance’s implications for civil liberties.

In a similar vein, lawmakers have introduced the federal “Fourth Amendment is Not for Sale” Act. This seeks to prevent government agencies from obtaining internet data without judicial oversight. If approved, it would address the current ease with which data brokers can transfer personal information to interested parties, especially if they’re paying.

The proposed legislation requires government agencies to get a court’s permission to search through your data, similar to a search warrant for a residence. It also stops law enforcement and intelligence agencies from buying data about people in the US or American citizens abroad if it was sourced through illegitimate means, like hacking, contract violations, or privacy policy breaches.

These new legislative efforts sparked concerns among government agencies about how they could limit investigative powers. Law enforcement authorities voiced their worries, stressing the importance of data access in investigating serious crimes like murder, terrorism, and kidnapping. Their feedback highlights the ongoing struggle to balance privacy needs with law enforcement requirements in the digital era.

Know Your IoT with the Cyber Trust Mark

The Cyber Trust Mark is a special cybersecurity certification program for companies that sell Internet of Things devices, such as smart TVs, home assistants, thermostats, and doorbells. The Cyber Trust label informs you about the device’s security system and its manufacturer’s accountability, data privacy, and vulnerability management. In other words, if a device you want has the Cyber Trust Mark, it has the necessary technology to protect your data from attacks.

Participation in the program is entirely voluntary, but it’s in a company’s best interest to take part. Experts predict participating manufacturers will see an increase in sales and customer retention. Amazon, Google, Logitech, Samsung, LG, and Best Buy are among the first to sign up. Expect to see the Cyber Trust Mark label sometime in early 2024.

New Updates to the Digital Services Act (DSA)

The DSA isn’t new legislation (it was first proposed in 2020), but it regularly receives updates. This year, the EU published its first list of large online platforms and search engines that must comply with the strictest DSA rules. These include fan-favorites Alibaba AliExpress, Amazon, Apple, Booking.com, Google, Instagram, Facebook, LinkedIn, Pinterest, Snapchat, TikTok, X, Wikipedia, YouTube, and Zalando.

We didn’t have to wait long for the first compliance problems, though. The EU officially opened its first investigation under the DSA against X. The move came after European commissioners observed large amounts of misinformation spreading on the platform with little moderation. In response, X argued it’s committed to following the DSA rules but also dedicated to facilitating freedom of expression. It’ll be interesting to see how it all plays out.

UK’s Data Protection and Digital Information Bill

Brexit prompted the UK government to create its own version of the GDPR, called the UK GDPR, which took effect at the end of the Brexit transition period. However, parliament recently recognized that the data protection laws were overdue for an update. This led to the proposal of the Data Protection and Digital Information Bill, which is still in draft form.

The new legislation aims to encourage further research and innovation, cut operational costs for UK businesses, and improve AI technologies — all while ensuring proper data protection. Changes suggested in the bill include revised rules for cookies, updates to direct marketing, and altered guidelines for international data transfers and processing data for scientific research. 

Another major change the bill would introduce concerns existing SAR (subject access request) rules. These give you the right to request copies of any data a business holds about you. While the EU’s GDPR lets you ask to see what data companies have collected, it also made it relatively easy for corporations to refuse. The new bill hopes to make such refusals harder, making your data more accessible to you.

However, it’s not all good news. The bill is under fire for potentially expanding the use of AI and other automated decision-making processes, making it tricky to know when they’re used or for what purpose. It also offers vague exemptions for reusing collected data in situations deemed beneficial for “national security” or “crime prevention,” which may increase digital surveillance.

Potential End of Internet Hate Speech in Germany

Back in April, Germany’s Federal Minister of Justice shared his proposal for tackling online hate speech. The main aim? Suspending or blocking hostile accounts — which sounds good in theory. 

The catch is that this plan puts most of the heavy lifting on social media platforms. They’re the ones who have to spot these hate accounts and take action. While the goal is to stop hate speech, some worry this could mess with our freedom of speech. To ease those concerns, the plan suggests that social media sites should at least let users know when they’re about to get the boot and give them a chance to plead their case.

It also poses a big privacy question. The plan says that if you’re a victim of online hate, you should have the option to find out who’s behind the hostile messages. That means social media sites might have to hand over the IP addresses of users hiding behind fake names, which could undermine everyone’s right to stay anonymous online. And since protecting online anonymity is part of Germany’s coalition agreement, that’s a bit of a sticky situation.

The proposal came out in April, and since then, we haven’t heard much more about it. So, we’ll have to wait and see how the German government deals with all these tricky issues when it comes to fighting hate speech on the internet.

Wrapping Up and Reflecting on the Impact of 2023 Cybersecurity Legislation

The past year has seen a wide range of cybersecurity legislation introduced to the public: some promising heightened protection and data security, others threatening to increase surveillance. While the US focused its efforts on preventing unsolicited data sales and banning TikTok, the EU created a first-ever task force to manage AI. At the same time, France zeroed in on child protection, while Germany moved to make online hate speech a thing of the past.

Looking ahead into 2024, it’s obvious cybersecurity will and should remain a top priority for policymakers. As our reliance on technology and its advancements grows, it’s more important than ever to update existing legislation and write new laws to protect the digital space and user data. Organizations and individuals alike must stay informed and adaptable to navigate the changing landscape of cyber threats and compliance requirements.
