
MLex

Prepare today for tomorrow's legislative changes.

Up-to-date global news & analysis on regulatory obligations

MLex is a leading independent news provider offering global insight, analysis, and reporting across 11 practice areas. A worldwide network of specialist journalists and editors ensures you can be the first to respond to opportunities and threats facing your business and your clients.

With personalized alerts, topic-based search filters, and detailed dossiers on specific cases, MLex keeps you one step ahead of the rules.

MLex was founded in 2005 as an independent news organization and is now also available in Austria.

Reading sample

13 June 2024
By Luca Bertuzzi

With this week’s European elections delivering a political shift to the right, the European Commission is expected to turn its priority to implementing the raft of digital policies that have been rolled out over the past five years.

Ensuring the EU’s landmark AI Act, which regulates artificial intelligence, is put in place will be one of its biggest concerns, as the flagship legislation is set to hit the books next month. The law took almost three years of intense negotiations and will fully start to bite by August 2026. Yet there are still several bumps ahead on the legislative path that need to be dealt with.

The shift to focusing on implementation and enforcement might be in tune with the strengthening of right-wing and far-right delegations in the European Parliament. However, for everyone from mid-level EU technocrats looking for a promotion to lawmakers wanting campaign material, there will still be incentives to pass new laws, including some that will regulate AI in the workplace and copyright.

The rightward political shift will also certainly change the regulatory environment, increasing the risk that decisions are dictated by political pressure rather than objective factors. Initial enforcement actions under the Digital Services Act suggest that EU institutions could prioritize making a splash rather than building a solid case or developing a constructive relationship with the regulated companies. But when it comes to the AI Act, senior EU officials have repeatedly made clear that the law aims to set the international standard in this regulatory space, exploiting the first-mover advantage to replicate the success of the General Data Protection Regulation, which inspired data protection regimes worldwide.

All of this leaves the EU’s executive branch in the driver’s seat, overseeing the compliance of AI companies like Mistral, OpenAI and Anthropic that commercialize powerful models.

 

Unprecedented role

Models like GPT-4, which underpin ChatGPT, are currently the most complex type of AI and can be adapted to a wide variety of applications. Some experts even expect that, rather than AI products being built from scratch, such models will become the foundation layer of most applications in the near future.

As a result, the commission will have to police some of the most sophisticated technologies of our time, with new generations released every few months. Regulatory capacity, as usual, will be limited.

Whether the AI Act is deemed a success story will depend on how well the EU executive manages to keep a finger on the pulse of technological trends and market developments.

The commission will review the lists of prohibited practices and high-risk applications annually and decide whether they need updating — a daunting task, considering what a moving target AI technologies are.

Luckily, the AI Act provides structures to address these challenges. A scientific panel will be set up to send qualified alerts when a systemic risk arises, an advisory forum will provide technical advice, and AI app providers should maintain a post-market monitoring system.

However, the regulator’s track record suggests a significant risk that it won’t have the workforce necessary to meet legal deadlines, will go its own way, and will dismiss external input.

 

Enforcement pitfalls

The commission has high hopes for the legislation, but some pitfalls to its enforcement remain. The EU executive might have to move fast and make bold regulatory decisions, something it is not used to doing. The AI Act follows the blueprint of traditional product safety legislation. But this doesn’t consider that AI applications can evolve so much that outcomes are sometimes impossible to predict, even for developers.

Another case in point is the AI Office, a new body established to implement the AI rulebook. Despite the rebranding, the office is little more than a repackaging of the previous directorate in charge of AI policy, with the same leadership. The commission will also have to shift its approach to digital policy toward a more regulatory, enforcement-driven role. The AI Act, the Digital Markets Act, and the Digital Services Act — all flagship regulations — have set high expectations.

 

Unfinished business

There is also some unfinished business. Nicolas Schmit, the lead candidate of the Party of European Socialists, the second-largest political family in the European Parliament, has already made clear that he wants legislation to regulate AI in the workplace.

Schmit is the outgoing commissioner for social affairs, and his party is gunning to retain that portfolio. In an external study commissioned on the matter last year, the EU executive inquired about the use of specific technologies such as apps and wearables, potentially vulnerable worker categories, and the managerial functions supported by AI. The other open question for the next commission is likely to be copyright in relation to training AI models. The AI Act does not necessarily modify the EU copyright regime, but it introduces some transparency obligations to help rightsholders enforce their rights.

Still, EU policymakers share the understanding that the current copyright rules are not fit for purpose where AI is concerned. However, there does not seem to be an appetite to reopen the Copyright Directive, one of the most controversial digital files Brussels has ever seen. Instead, the intent seems to be to revise the EU Database Directive, which could also work, as the aim is to regulate the datasets that feed into AI models.

 

Market concentration

There is also the question of concentration in the AI market, which is garnering regulators’ attention in the EU, the UK and the US. Whether the commission will take a more muscular enforcement approach in this area will largely depend on who will take over from Margrethe Vestager, the EU competition chief for the past decade.

Vestager’s approach has been to issue hefty fines against Big Tech companies, but she has shied away from structural remedies or from blocking mergers that proved highly consequential, like Google-Fitbit and Facebook-WhatsApp. She has been criticized for failing to move the dial on concentration in digital markets. The DMA, a law setting out a list of do’s and don’ts for Big Tech companies, is largely a recognition that traditional antitrust probes have proved inadequate in such fast-moving markets. The law aims to prevent digital giants from leveraging their dominance to gain prominence in adjacent markets. However, it failed to cover the cloud market, allowing hyperscalers to become the sole route for AI startups to train and distribute their models.

With the European Parliament shifting toward the right of the political spectrum, Vestager’s approach will come under pressure amid a drive to make competition policy subservient to industrial policy in a broader bid for European strategic autonomy. Despite EU officials patting each other on the back for having set the gold standard for AI regulation, the reality is much more complex, and past experience should counsel caution.

The EU’s path to regulating artificial intelligence has only just started, but it’s going to be a long and winding one.

29 May 2024
By Khushita Vasant

The Federal Bureau of Investigation conducted an unannounced inspection at Cortland Management in Atlanta as part of a criminal antitrust investigation by the US Department of Justice into a conspiracy to artificially inflate rents for apartment units, MLex has learned. The raid took place on Wednesday, May 22 at Cortland’s headquarters in Atlanta, Georgia, it is understood.

“Agents were at that address conducting court authorized activity,” Tony Thomas, a spokesperson for the FBI in Atlanta, told MLex. “That is all we can say at this point.” A spokesperson for the DOJ’s antitrust division declined to comment. Allison Worldwide, which handles press communications for Cortland, didn’t respond to an email seeking comment.

Cortland, which owns and manages apartment communities across the US, was founded in 2005 with a focus on multifamily development in Atlanta, according to the company’s website. It has regional offices in Charlotte, Dallas, Denver, Greenwich, Houston, Orlando, and Phoenix. The inspection comes as the DOJ’s antitrust division deepens its investigation into the rental housing market and the use of price-setting software provided by RealPage, whose clients include some of the largest US residential real estate owners and management companies. According to a March report by Politico, the DOJ issued subpoenas earlier in the year on behalf of a federal grand jury in Washington.

RealPage is the developer of a technology platform that provides software for the multifamily rental housing markets. This includes revenue management software, or RMS, now called “AI Revenue Management” and previously known as YieldStar. Cortland is named among dozens of defendants in a putative class action in Tennessee alleging the use of RealPage’s software platform to coordinate and agree upon rental housing pricing and supply. The lawsuit says Cortland uses RealPage’s revenue management software to manage some or all of its more than 58,000 apartments nationally. The lawsuit accuses RealPage, Cortland and several other managers of large-scale multifamily residential apartment buildings of using RealPage’s property management software to fix, raise, stabilize, or maintain at artificially high levels the rental prices for multifamily residential real estate across the US.

RealPage is headquartered in Richardson, Texas, and provides software and services to managers of residential rental apartments, including the YieldStar/AI Revenue Management software. When it was acquired in December 2020 by Chicago-based private equity firm Thoma Bravo, LP for roughly $10.2 billion, RealPage had over 31,700 clients, including each of the 10 largest multifamily property management companies in the US.

The US housing market has also been the subject of congressional concern in recent years. In late 2022, US Senator Sherrod Brown wrote to Federal Trade Commission Chair Lina Khan to urge the agency to review property owners’ and landlords’ use of price optimization software following reports that the software’s algorithm inflated rents and suppressed competition in the housing market.

US Senators Elizabeth Warren, Ed Markey, Tina Smith, and Bernie Sanders also wrote to the DOJ in March 2023, calling for a review of RealPage’s YieldStar algorithm following new findings from their own investigation. They warned that YieldStar, which is used to set prices on millions of rental properties, could be “facilitating de-facto price setting and driving rapid inflation.”

In late 2023, RealPage and 14 of the largest residential landlords in the District of Columbia were hit with a lawsuit by the DC attorney general for allegedly conspiring to forgo competition, share sensitive information and delegate rent-setting authority to RealPage to charge inflated rents.

For the inside track on antitrust investigations, litigation, enforcement trends and changes to policy, in the US and across the globe, activate your instant trial of MLex today.

30 May 2024
By Nicholas Hirst

A UK merger decision not to probe Microsoft’s investment into Mistral AI ought to have been good news for tech companies in the space. Instead, it’s left some of them feeling quite queasy.

That’s because the grounds to suspect that Microsoft might gain “material influence” over Mistral were threadbare at best — yet that didn’t stop the UK regulator from thinking hard about probing the French startup whose few dozen employees are at the forefront of the AI revolution. But it’s also because the dependency test set out by the CMA is so vague and broad as to cast suspicion over almost any investment by a large tech company into AI developers, as well as over many other commercial relationships. That may chill investment into the sector while also worrying developers.

The Competition and Markets Authority explained last week why Microsoft’s investment into Mistral did not qualify as a relevant merger situation.  That February investment saw the world’s most valuable company invest 15 million euros ($16.2 million) into France’s Mistral AI, which it can convert into a stake.

The deal would see Mistral use Microsoft’s cloud-computing power to train its AI and, in turn, it would make its AI models available on Microsoft’s platform.  The CMA, like many competition regulators, is very concerned that the AI economy may tip into the hands of the tech giants. It considered whether the agreement might give Microsoft “material influence” over Mistral, giving the merger regulator jurisdiction to scrutinize it.

In a 2006 probe into dairy companies, the UK regulator found it had jurisdiction to look at a 15 percent stake, in part because it accorded the investor a potentially influential board seat.  Amazon’s 2019 acquisition of a 16 percent stake in Deliveroo was considered to give it sufficient influence to warrant scrutiny, in particular because the e-commerce giant gained commercial influence via a seat on the board and its supply of cloud-computing power.

 

No material influence

By contrast, last week’s decision concluded that Microsoft did not gain material influence over Mistral. Its potential stake would represent less than 1 percent of Mistral and come with no special rights or board representation, the CMA said. Nor did the agreement to carry Mistral models on Microsoft’s platform lead to material influence, in particular given that Mistral remained free to put its models on other platforms. Neither did Microsoft’s offer of cloud-computing power, given it represented only a part of what Mistral expected to use.

Given that constellation of facts, it’s surprising that the CMA thought hard about calling in Mistral at all.  The decision shows that Microsoft provided a briefing paper, responded to two inquiry letters, and made a further submission on jurisdiction, all before the CMA went public on April 24. As for Mistral, it answered one questionnaire and made its own submission on jurisdiction, again all before April 24.

The CMA’s guidance says it should only consider calling in “material influence” scenarios where “there is a reasonable chance” that the deal could warrant an in-depth probe. How could that be possible here? What is clear from the CMA decision is that it’s applying a very broad test of material influence.

It asserts that the supply of cloud computing or a distribution agreement can by itself trigger “material influence” where it “creates a dependency … such that it enables [the investor] to influence materially the commercial policy of the foundation model developer.”

The section on cloud computing goes into more detail, explaining that a dependency could be created by exclusivity requirements or “other terms that compromise the commercial freedom of the foundation model developer.”

For example, by stopping it from running its model on other platforms, the CMA says.

 

Dependency

“The standard throws into question so many relationships that are at the heart of the tech industry,” says one critic. “What’s the limiting principle?”

Would material influence catch Netflix’s use of AWS’s cloud service or Qualcomm’s sale of key semiconductors to Apple or any deals licensing standard-essential patents?

It would certainly seem to snag any supply deal struck by a company enjoying a dominant position in the market in question. Why, then, haven’t the many AI investments of Nvidia — the dominant supplier of AI semiconductors — drawn UK scrutiny yet? (Nvidia declined to comment, but a corporate filing yesterday said it had received questions from UK, EU and Chinese regulators about issues including its AI investments.)

But dependency is a wider concept than dominance. For example, European laws on “economic dependency” catch the commercial relations between brands and retailers.  It’s easy to imagine R&D partnerships between large and small pharma companies also giving rise to dependency.

How should one know which of all these commercial relationships will draw the attention of the UK’s powerful merger regulator, let alone trigger concerns?

 

Uneven enforcement

That matters for at least three reasons.  While some AI developers have welcomed regulators’ scrutiny of Big Tech’s move into AI, they fear the current approach risks scaring off investors and cloud-computing partners. Developers at the forefront of the AI revolution are currently scrambling for cash, expertise and infrastructure.

Such developers are also not equipped to deal with competition assessments and inquiries, whether in terms of in-house expertise or resources. Thresholds are meant to shield smaller companies from the heaviest regulatory burdens. And finally, the breadth of the CMA’s test means it will only ever catch a handful of the situations in which “dependency” is created. That risks unequal treatment of companies within a single sector.

The CMA declined to comment specifically. It has previously said it hopes its inquiries into various Microsoft and Amazon AI partnerships will provide clarity to the market. So, expect future decisions to flesh out the reasoning in Mistral.  Microsoft seems resigned to a full-on probe into the billions it has invested into OpenAI, where the cloud computing deal is more significant. The logic of the Mistral decision confirms that.

For the inside track on merger developments, with unique insight from specialist journalists on the review process, procedural steps and ensuing court litigation, activate your instant trial of MLex today.

29 May 2024
By Amy Miller

In the absence of federal rules for artificial intelligence, US states are stepping in to fill the void, much as they did with data breach and consumer privacy regulation. Once again, state lawmakers are turning to the EU for guidance, and EU officials say they are happy to help.

Two months after the EU passed landmark legislation regulating the use of AI in 27 countries, Colorado became the first US state to enact a law building on the EU’s risk-based approach. It probably won’t be the last. EU officials are working with lawmakers and regulators in California and other states hoping to pass similar legislation to put guardrails around AI that could threaten fundamental human rights.

The EU’s office in San Francisco has a map of all 43 states where AI legislation has been introduced this year, said Gerard de Graaf, Senior EU Envoy for Digital to Silicon Valley, but much of the recent focus has been in Sacramento.

It is de Graaf’s job to promote EU tech policy and strengthen cooperation with Silicon Valley. Coordination is necessary, de Graaf said, because technology is a global industry, and regulators need to avoid forcing businesses to comply with different rules in different jurisdictions.

US states need to coordinate their regulation of AI across their borders, too, de Graaf said, and a first step is settling on a uniform definition of AI, and deciding which technologies should be regulated, without chilling innovation.

“I always say it’s bad rules that stifle innovation,” de Graaf said. “Good rules can actually support innovation and often do.”

 

Colorado follows EU

Like the EU AI Act, Colorado’s AI law focuses on consumer protection and high-risk AI systems. The EU AI Act bans emotion recognition technology in schools and the workplace, prohibits social credit scores that reward or punish certain kinds of behavior, and prohibits predictive policing in certain instances. It also applies high-risk labels to AI used in health care, hiring, and the issuing of government benefits.

Starting in February 2026, makers and deployers of high-risk AI systems in Colorado will also have to be far more transparent with the public about how their technology operates, how it’s used and who it could hurt.

The Colorado law imposes now-familiar notice, documentation, disclosure, and impact-assessment requirements on developers and deployers of “high-risk” AI systems. Much as in the EU AI Act, those are defined as any AI system that “makes, or is a substantial factor in making, a consequential decision,” such as decisions about housing, lending and employment. Makers and deployers will have to disclose the types of data used to train their AI.

There are obvious differences in scope and enforcement. The EU AI Act addresses how law enforcement agencies can use AI, while the Colorado AI Act does not. The Colorado attorney general’s office will be responsible for enforcement and has rule-making authority, and both developers and deployers of high-risk AI will have to demonstrate compliance with risk management requirements.

But not everyone is convinced. Colorado Governor Jared Polis, a Democrat, said he approved the legislation even though he had “reservations” it could hurt the state’s budding AI industry, particularly for small startups.

Despite that skepticism, Colorado’s groundbreaking AI law will likely be a model for other US states. It’s the most successful result so far from a bipartisan, multi-state AI working group seeking to coordinate AI regulations across state lines, and it builds on concepts from US government agencies, as well as the EU.

State lawmakers wanted the EU’s input because interoperability was a primary goal of the working group, state lawmakers said during a livestreamed discussion on LinkedIn last week. The AI working group heard multiple presentations on AI from EU officials, as well as from privacy attorneys and scholars who followed the EU’s AI framework closely, they said.

“The goal was always, well the EU is doing it, so we can do it,” said Colorado Sen. Robert Rodriguez, sponsor of the Colorado AI Act.

 

Focus on California

EU officials have been particularly focused on California, the epicenter of AI technology and investment in the US, de Graaf said. In recent weeks, California lawmakers and regulators have met multiple times to discuss a wide range of AI issues with EU officials and leaders who prepared and shaped the EU AI Act.

De Graaf testified at a public hearing in Sacramento that the EU wants to set the global standard for AI regulation, much as it did for consumer privacy with the General Data Protection Regulation (GDPR). EU officials are “very keen” to work with California lawmakers on alignment, he said.

“I can tell you that our colleagues in Brussels are following very closely what you’re doing in California,” de Graaf told an Assembly privacy committee in February. “They’re fully aware of the bills that you have introduced, and they are very interested in these bills and further cooperation.”

Unlike in Colorado, California lawmakers have introduced dozens of bills aimed at regulating various aspects of AI, from prohibiting discrimination to forcing companies to tell the public more about how the technology operates.

De Graaf said he is advising California lawmakers on three proposals that incorporate several aspects of the EU’s AI Act, including risk-based approaches to regulation, required testing and assessment of AI deemed high risk, and greater transparency requirements for AI-generated content. If enacted, the proposals would cover about 80 percent of what the EU AI Act regulates, de Graaf said.

Assemblymember Rebecca Bauer-Kahan, a San Ramon Democrat, is sponsoring AB 2930, a bill that would require businesses and state agencies to prohibit discrimination in automated decision-making technology. State Senator Scott Wiener, a Democrat from San Francisco, is sponsoring SB 1047, which would require developers of AI models to implement safeguards and policies to prevent public safety threats, and would also create a new oversight agency to regulate generative AI. Assemblywoman Buffy Wicks, a Democrat from the East Bay, is sponsoring AB 3211, which requires online platforms to watermark AI-generated images and videos. Last week all three AI bills passed out of the chambers in which they were introduced.

“It’s not just a one-way street,” de Graaf said. “It’s a two-way street where we learn from California, and they learn from us and we try to exchange the best ideas between Europe and California.”

For the latest developments in AI regulation and the tech sector, privacy and cybersecurity, online safety, content moderation and more, activate your instant trial of MLex today.

Your benefits:

MLex practice areas:

Get a head start on the issues shaping your business landscape.

Policy changes, cartel investigations, abuse-of-dominance investigations, antitrust litigation, and enforcement trends.

M&A developments, with expert, forward-looking analysis from specialist reporters on the review process, the procedural steps, and the court proceedings arising from approved or blocked transactions.

Key developments in EU state aid law and enforcement: EU investigations and policy changes regarding governments’ support for companies and sectors.

Global trade agreements, shifting trade policies, and trade disputes and their enforcement (including anti-dumping, anti-subsidy, and safeguard investigations).

The latest developments in data protection and regulation, with exclusive, forward-looking reports on the GDPR and CCPA, international data flows, cybersecurity, and the ever-evolving impact of AI.

Measures regulating the new and established technologies that power a connected world. Comprehensive coverage of online safety, content moderation, platform regulation, artificial intelligence, connected devices, and telecoms regulation.

Coverage of the mobility transition that is reshaping transport with the rise of connected, automated, shared, and electric vehicles, raising new regulatory questions worldwide on data, energy, and industrial policy.

The most important developments in European energy and environmental policy, including energy market regulation, security of energy supply, climate change, carbon pricing, renewable energy, net-zero technologies, alternative fuels, and hydrogen.

Exclusive reporting on the supervision of global investment and retail banking, shadow banking, cryptocurrencies and digital finance, sustainable finance, clearinghouses, investment funds, insurers, payment services, and more.

Changes in anti-corruption legislation, plus coverage of investigations, litigation and criminal proceedings, anti-money laundering, insider trading, fraud, and market manipulation.

Gain insight into the latest developments in, and future regulation of, AI.

MLex has proven to be a vital source of information and insight. The MLex news service enables my team to identify and navigate legal developments around the world.
Gavin Baird
Public Policy Manager, Google

LexisNexis
Newsletter

Never miss new releases, events, and product news again!