You think about buying a new pair of trainers and suddenly on social media you are bombarded with ads for deals on trainers. You wonder if technology can now hear your thoughts.
“I think algorithms and artificial intelligence see patterns that you show on social media and calculate your next move,” says 18-year-old Namra. Most of her school friends also buy things online and forward each other ads about offers on products. Together, they often muse over what exactly an algorithm is and how it works.
“I am not all that concerned about sharing of information because everyone’s information is on the internet. Does it matter if mine is too?” asks Zohaib, a 29-year-old consultant.
In simple terms, an algorithm is a formula or set of instructions to solve a problem. It seems convenient to have artificial intelligence solving a problem or responding to a need. But is this technology permeating multiple aspects of modern living, truly free of any value judgment?
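To make the definition concrete, here is a minimal sketch of one such set of instructions. It is a toy example, not any real platform's code: the function name, the data, and the counting approach are all hypothetical, but it shows how a few lines of logic can turn a pattern of behaviour (what a user has been viewing) into a targeted output (which ads to show).

```python
# Toy illustration only: a hypothetical "algorithm" that recommends ads
# by finding which product category a user has viewed most often.
from collections import Counter

def recommend_ads(viewed_categories, ads_by_category, top_n=2):
    """Return up to top_n ads from the user's most-viewed category."""
    if not viewed_categories:
        return []
    # Step 1: count views per category and pick the most frequent one.
    top_category, _ = Counter(viewed_categories).most_common(1)[0]
    # Step 2: return the first few ads filed under that category.
    return ads_by_category.get(top_category, [])[:top_n]

# A user who has mostly been looking at trainers...
views = ["trainers", "phones", "trainers", "trainers"]
ads = {
    "trainers": ["20% off running shoes", "Buy one, get one on trainers"],
    "phones": ["New phone launch"],
}
print(recommend_ads(views, ads))
# → ['20% off running shoes', 'Buy one, get one on trainers']
```

Real recommendation systems are vastly more complex, but the principle is the same: the output is only as neutral as the data and the rules someone chose to encode.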
In the US, artificial-intelligence-driven assessments are being used in the recruitment process at firms such as Unilever, Goldman Sachs and Hilton to, for instance, analyse candidates' employability. The algorithms filter out those who do not fit the model of the ‘ideal worker’, which can be discriminatory against certain groups or individuals. The housing, insurance and banking sectors, too, want to overturn laws and use automated tools to run background checks on clients.
In Pakistan, the debate on digital rights has just started.
“Algorithmic transparency is an issue all over the world. The public has no way to understand which data is being used and how and when algorithms are making decisions. We have to push for algorithms to be made accountable,” says Shmyla Khan, project manager at Digital Rights Foundation.
She explains how digital rights are an extension of rights into the digital sphere – the personal space has been ‘datafied’ and data is a source of profit. Khan points out the problem with treating the mainstream understanding of consent as the touchstone.
“What we mean by consent and what constitutes consent is a bit complicated. Consent should be meaningful, explicit, and informed. Users should know what they are signing up for, and how their personal data will be used,” she says.
Companies are addressing this through push notifications explaining how users’ personal data will be used. However, the issue of consent is also tied to choice – whether it is possible to be wary of apps or to opt out of them at all. In her view, until the power imbalance between the companies that harvest data and the consumers who use apps is addressed, such fixes will remain cosmetic.
“We saw how the Cambridge Analytica episode played out. The firm gathered data in 2014 and people only realised much later what had happened. The technical and legalistic onus is on the consumer. Even if a hyperaware user is using an app, it will be a challenge to protect one’s data as a lot of information about how apps use data is still hidden,” she says.
It is common for consumers to not read terms and conditions or the fine print carefully.
“Companies use the terms and conditions as waivers most of the time. Across the globe, especially in the EU, these issues are being taken up. But here, there is no proper legislation per se. So consumers need to exercise some level of caution,” says Farieha Aziz, founder of Bolo Bhi, a civil society organisation working for digital rights and civic responsibility.
Aziz adds that consumers do know there is some level of compromise, and those who can will use Facebook in a web browser instead of downloading the app, as the app is more invasive.
“A lot of data is housed here by NADRA, the ISPs and the telcos, for instance. It is a difficult terrain to navigate,” she says.
“We need transparency, even if data is being used for governance,” says Khan.
“Instead of making data a commodity that results in profit and control, it should be made an externality – produced and destroyed on agreed terms,” she adds.