Hi folks!
I’m back with this week’s newsletter where I serve the juicy bits from the tech-law drama, so you can stay in the loop without melting like an ice cream in this summer heat.
Before I begin, some news is in order. I have migrated this newsletter from Mailchimp to Substack. Email addresses of all subscribers will be deleted from the previous newsletter service, Mailchimp.
And this newsletter is now “The Private Bay”. Smart, eh?
With our house in order, let’s get cracking!
The Crypto Conflict
First, let’s log in to the volatile network of crypto exchanges.
The SEC took the crypto world by storm this week. It sued Binance and Coinbase for failing to register as a securities exchange and also accused Binance of artificially inflating its trading volumes and diverting customer funds (more on this here).
Crypto exchanges have long argued that tokens do not constitute securities and should not be regulated by the SEC. But the SEC doesn’t think so: it said that Coinbase traded at least 13 crypto assets, and Binance 12, that are securities and should have been registered. It is seeking financial penalties from the exchanges.
The issue here is how U.S. law defines ‘securities’. If an investment of money into a common enterprise generates profits from the “efforts of others”, it’s a security. A few cases decided in the courts have ruled that investments in crypto relied on developers’ efforts to grow or maintain the associated blockchain, and as such the profits depended on the “efforts of others”.
Although these cases affect Binance and Coinbase U.S., it will be interesting to see if regulators in other jurisdictions also assert themselves over crypto exchanges. For example, the Nigerian SEC has ordered Binance to “immediately stop soliciting Nigerian investors in any form whatsoever.”
In the long term, we’ll have to see what survives other than Bitcoin (since it’s not developer-dependent). For now, it seems like a major glitch in Web 3.0 that has caused tokens to nosedive.
Privacy Updates
Who’d have thought VR headsets would be harvesting data from our faces? Apple has launched the Vision Pro, and that takes our official count to 3.6 million devices that are trying to capture our personal data. But this device is certainly a pro, because it captures even more granular details about us. It will track eye movement to guess the user’s age, gender, medical and physical conditions, and cognitive state. There are other measurements too: electrical activity in the brain, heartbeats and rhythms, muscle activity, blood density in the brain, blood pressure, etc.
And this small set of data will then be used to monitor our attention and behavior, and ultimately offer super-personalized ads. An API also exists to allow sharing of eye-tracking-related data with third parties. So if you’re testing the Vision Pro next time around and Amazon suggests a butcher’s knife after an extended gaming session that you end up losing, don’t act surprised.
Continuing with some more privacy-related stuff, the Delhi High Court has directed Indian Kanoon to mask the name of a person who was acquitted of charges against him. The petitioner claimed his right to be forgotten, and courts have generally favored petitioners who were acquitted or were victims of a crime.
After Amazon, Microsoft is now settling FTC charges of collecting children’s personal information without parental consent. The issue relates to Xbox, where children could sign up without notifying their parents or obtaining their consent.
At this point, I have a joke on the children’s right to privacy in India, but it’s still a draft.
Coming to the future of tech, a report by a committee appointed by the RBI to review customer service standards in RBI-regulated entities has revealed how privacy may find a place in the Indian fintech space. The committee acknowledged that data protection is more important than ever in a risky cyber-security environment.
It has recommended using biometrics in place of One Time Passwords (OTPs) to prevent frauds where OTPs are obtained from victims, replacing any physical-signature requirement with biometrics, and blocking mobile screen sharing to thwart screen-sharing malware.
Additionally, the committee has recommended an automated alert to the beneficiary bank whenever a financial fraud complaint is registered on the Indian Cybercrime Reporting Portal. Once it receives such an alert, the beneficiary bank should immediately block the equivalent amount in the account until detailed verification of the reported transaction is completed.
I’m not sure about using biometrics instead of OTPs. It’s even easier to persuade people to put their finger on a fingerprint reader than to give up their OTP, and it would also create confusion because existing payment-related applications use biometrics for unlocking. A victim may think they’re just unlocking the app when, in fact, they are authenticating a transaction.
Also, we have failed to educate users on the pitfalls of mobile banking in over a decade and I wouldn’t want to transition to something even more confusing. However, I think the automated alerts would be a game-changer. Banks right now demand letters from the police to freeze funds, and you know how that goes.
In Europe, the UK ICO has warned against the danger of discrimination and called for the inclusion of diverse groups at the “heart of development” in neurotech. And the Dutch privacy regulator has sought more information from OpenAI about how it gathered the data used to create its software and how it stores it.
Artificial (Intelligence) Troubles
But AI hasn’t opened Pandora’s box in the privacy world alone.
A German stock photographer asked for his images to be removed from a dataset used to train AI image generators. Interestingly, he was met not only with a refusal from the dataset owner but also with an invoice for $979 for filing an unjustified copyright claim.
Turns out, German copyright law has a specific provision on data mining (Sec. 44b) that permits temporary reproductions of data if they are “lawfully accessible” and “deleted afterward”. The dataset owner claimed this exception and said that it does not store any copies of the photographer’s work. Rather, it only found image files on the Internet using crawlers for the initial training of a self-learning algorithm, and briefly recorded and evaluated them to obtain information.
Meanwhile, Microsoft has offered free GPT models to U.S. federal agencies using its Azure cloud services. And U.S. senators have introduced a bipartisan artificial intelligence bill requiring the U.S. government to be transparent when using AI. Besides governments, the EU’s Deputy Commissioner has suggested that companies should label AI-generated content to combat disinformation.
In India, MoS IT Rajeev Chandrasekhar has said that AI will be regulated from the perspective of harm it can inflict on users. That’s all good Mr. Minister, but where’s that Digital India Bill that was supposed to be released for consultation in the first week of June? 🙄
Latest hashtags
Lastly, there’s some chilling stuff coming from social media.
The Wall Street Journal reported that Instagram was not just helping pedophiles follow each other; its algorithms were promoting CSAM. The EU’s Internal Market Commissioner said he will discuss the issue with Zuckerberg later this month and also warned that Meta will have to demonstrate measures under the Digital Services Act after August 25 or face hefty fines.
And India has again issued some impossible rules without any consultation. The Health Ministry has asked streaming services to retrofit tobacco warnings into all content. Good luck to the folks at Netflix, Amazon, JioCinema, Disney, and more.
And that’s it for this week! Did you enjoy reading this newsletter? What am I doing well, and what can I improve? You can reply to this email (or send me a message) to let me know.
This is Rohit, signing off!