This is my speech during the debate on the Online Safety Bill in Parliament on 8 November 2022.
The Online Safety Bill before us seeks to tackle harmful content on online services like Facebook, YouTube and TikTok, which are accessible to users in Singapore. I support the Bill, given the online harms that people in Singapore have been subjected to on social media and the internet, and the growing need to protect our people, especially the young, from these harms.
However, I have some clarifications to seek, which I hope the Minister will address before we vote on the Bill.
Protecting young persons
Access to digital communication devices is not optional in this day and age, even for younger children. For example, if a 10-year-old child takes public transport on his own to and from school, his parents would want him to be able to contact them in case of an emergency, or to track his location. In most cases, this can only be done using a mobile phone or smart watch. However, it would not be wise to give that same 10-year-old unfiltered access to the internet on his phone.
Currently, parents can install a parental control app on their child’s phone. Such an app allows parents to restrict content, approve apps, set screen time limits and filter harmful content. It can also locate the child using GPS.
I set this up for my son some time back. However, even with all my professional technical knowledge, it took me quite a bit of time and research to figure out which software was most suitable and how to configure it. I wonder how many parents have tried to set up parental control software for their children. Those who haven’t should be aware that their children and teens have essentially unfiltered access to the internet, and all the harms that come with it. These parents can only regulate their children’s internet access by looking over their shoulders. This is a suboptimal solution given the asymmetry of technical knowledge between most parents and their children. Most children nowadays can run rings around their parents when it comes to configuring settings on their mobile phones.
Also, for such content filtering to work for young people, age verification is needed. The Code of Practice for Online Safety for Designated Social Media Services proposed by the Ministry states that social media services must have additional measures to protect children, including minimising children’s exposure to inappropriate content and ensuring that their account settings are age-appropriate by default.
However, the Code of Practice does not prescribe how this age verification should be implemented. Indeed, attempts at imposing age verification have previously failed in the United Kingdom’s implementation of the Digital Economy Act 2017, in part because of privacy concerns: online age verification providers could collect excessive personally identifiable information and process it for other purposes in violation of privacy laws.
Separately, a young user can circumvent age restrictions by declaring his age to be 18 when in fact he is only 12.
Can I ask the Minister: How will content providers be required to perform age verification checks in practice?
Some internet service providers do provide parental control tools which block harmful content before it even comes through the fibre. However, these require a separate subscription that entails an additional cost every month. Many parents are not even aware of this service. They would have to take the effort to log in to their broadband provider’s website and subscribe to it. This is additional friction which will deter many parents from signing up, leaving young children vulnerable to accessing harmful content without their parents’ knowledge.
It would be better for internet service providers to block harmful content at the network level by default, rather than expect parents to set up complicated filtering software on their children’s devices. This network-level filtering should be activated by default for all new mobile and broadband subscriptions, and offered free to all subscribers. This will ensure that even the children of less tech-savvy parents are protected by default. Adults who need full access to the internet should be able to opt out of the filtering service without any charge.
I am glad to note that under the Code of Practice, content that may encourage young users to engage in dangerous acts will be considered harmful content and be subject to additional safeguards for young users. One example is the skull-breaker challenge, where two people trick a friend standing between them into taking a vertical jump, then kick her legs out from under her while she is in the air, making her fall backwards and potentially injure her head and back. People sometimes do not properly assess the risk associated with an activity. They may have seen others perform it without incident in a YouTube video, and may be tempted to experiment themselves.
Ultimately, we cannot completely insulate young people from all dangerous, harmful and silly content online. The best protection is for parents and teachers to educate their children and students about the potentially harmful content they may encounter online and the consequences of engaging with it. The Media Literacy Council could also directly push out educational materials on the platforms that young people use, like TikTok and Telegram. This should be an ongoing process, not a one-time effort, because harmful content is constantly evolving and new trends are always emerging.
Under this Bill, failure to comply with the directions from IMDA could be an offence punishable by a fine on conviction. Can the Minister clarify if this fine will apply to only the company or also individual officers within the company responsible for ensuring compliance? Given the financial might of social media companies, they might have no problem paying even a huge fine.
The Code of Practice will require social media services to submit annual reports to IMDA to reflect Singapore users’ experience on the services, including the actions that they have taken on user reports.
I would like to propose that social media services be required to also submit quarterly reports listing the type of content that has been flagged by users. This is so that IMDA can keep apprised of trends in harmful online content and behaviours.
Section B of the Code of Practice requires that users of online communication services be able to report harmful content or unwanted interactions to the platform providers through an “effective, transparent and easy to use mechanism”, and that social media services take action on these user reports in a “timely manner”.
This leaves a lot of room for interpretation. In contrast, Australia’s Online Safety Act requires platforms to provide a clear and easily accessible complaints system for end-users to submit complaints or requests to remove certain material, and the platforms must respond to the complaint within 48 hours, failing which the end-user may contact the eSafety Commissioner, who has the power to investigate the complaint. I would like to propose that Singapore’s Code of Practice include these specific requirements and timelines.
Another potential area of harm to young people is online gaming, which can be addictive and can cause social problems. Can the Minister share to what extent this Bill and the Code of Practice will regulate online gaming?
I note that the Bill covers cyberbullying content that is likely to cause harassment, alarm or distress to the target person. Will the non-consensual sharing of intimate images be covered in this Bill? There have been recent cases of disgruntled ex-lovers sharing such images, which most certainly cause alarm and distress to their victims.
Protecting our democratic rights
Next, I would like to seek clarifications from the Minister regarding the protection of Singaporeans’ democratic rights in this Bill. Some respondents to the public consultation sought assurance that the proposed measures would not affect user privacy or freedom of expression.
The Bill gives wide-ranging powers to IMDA to issue directions to social media companies to remove content it deems harmful. Can the Minister elaborate on what safeguards will be in place to ensure that such powers are not abused? Will there be any channels for independent appeal or judicial review?
The United Kingdom’s Online Safety Bill specifically includes protections to safeguard pluralism and ensure internet users can continue to engage in robust debate online. For example, Section 29 of the latest draft of the UK’s Online Safety Bill requires content providers to “have regard to the importance of protecting the rights of users and interested persons to freedom of expression within the law” when deciding on safety measures and policies.
Section 15 of the UK Bill also requires social media services to put in place clear policies to protect “content of democratic importance” such as user-submitted comments supporting or opposing particular political parties or policies, and to enforce this consistently across all content moderation. The UK Bill also requires that platforms must not discriminate against different political viewpoints.
The UK’s draft legislation has also been designed to safeguard access to journalistic content. News publishers’ content will be exempted from social media platforms’ new online safety obligations. Because of this, social media platforms will not be incentivised to remove news publishers’ content out of fear of sanctions from the regulator.
Are there such provisions in Singapore’s Online Safety Bill? If not, will the Government study the online safety bills in other countries like the UK and Australia, and include democratic protections in the Code of Practice and subsidiary legislation?
Will Singapore have the equivalent of an eSafety Commissioner like Australia does? Who will this eSafety Commissioner be and will he or she be empowered to make directions independent of the Government?
Australia’s Online Safety Act was itself controversial, in part because of the huge amount of discretion and power it puts in the hands of the Minister for Communications and the eSafety Commissioner to determine what community expectations are. How will Singapore’s Bill safeguard democratic freedoms while protecting the young from online harms?
I note that some electronic services are excluded from this Bill. Examples of these are SMS and MMS services. Can I confirm with the Minister that other private messaging platforms like WhatsApp, Telegram and Signal are also excluded from this Bill? For the avoidance of doubt, I am not advocating for these services to be included in this Bill, as they are primarily used for private communication between individuals. Much of this communication is end-to-end encrypted, which means even the platforms do not have access to the data exchanged by their users. I would have strong privacy concerns if this encryption were to be broken for the purpose of enhancing online safety.