1. General Legal Framework
1.1 General Legal Background Framework
The Philippines has no content-specific statute that names artificial intelligence (AI) as its main subject. However, laws and regulations that contain general principles, such as principles of civil law, and special laws designed for particular subjects, such as data privacy and intellectual property, may apply to AI, given that AI deals with data sets and personal information.
Of particular importance is the law governing data privacy in the Philippines, the Data Privacy Act of 2012 (DPA). The DPA refers to certain automated processes, which data privacy regulations further define as automated decision-making, a concept associated with AI. In general, the DPA regulates the processing of personal information by both government and private entities. The National Privacy Commission (NPC) is the regulatory body responsible for enforcing the DPA and ensuring compliance with data privacy standards, including for automated processes that implicate data privacy. Under the Implementing Rules and Regulations (IRR) of the DPA, data subjects have the right to know about the processing of their personal data, including any instance of automated decision-making and profiling. Furthermore, the IRR of the DPA sets forth the general principle that a data subject must be given specific information about the purpose and extent of the processing of their personal data, including any instance of automated processing.
In relation to automated decision-making involving AI, the NPC issued Circular No 22-04, which requires a personal information controller or processor that carries out any automated decision-making or profiling operation to indicate its registration with the NPC and to identify the data processing system involved in the automated decision-making or profiling operation.
Other general laws that may be applicable to AI in the Philippines include intellectual property laws and consumer protection laws. The Intellectual Property Code provides a legal framework for protecting the intellectual property rights of AI developers and companies that use AI technologies. It encourages innovation and investment in AI by providing legal protection for AI-related inventions, works and trademarks. The Consumer Act of the Philippines provides a framework for protecting consumers against unfair or deceptive trade practices related to AI. It helps to ensure that AI systems are safe, reliable and transparent, and that companies that use AI treat consumers fairly and honestly. For instance, companies that use AI may engage in price discrimination, discriminate against certain groups, or make false or misleading claims about the capabilities of their AI systems. The Consumer Act provides consumers with legal remedies against such unfair and deceptive practices.
It is worth noting that the Philippine government has taken steps to promote the development of AI in the country. On 5 May 2021, the Department of Trade and Industry (DTI) issued a National AI Strategy Roadmap for the Philippines (the “AI Roadmap”). The AI Roadmap is a plan that aims to promote the adoption and use of AI in various sectors of the economy. The policy and regulatory environment component of the AI Roadmap aims to create an enabling environment for the development and use of AI in the country. This includes the development of policies and regulations that promote innovation, protect intellectual property rights and ensure responsible and ethical use of AI.
2. Industry Use of AI and Machine Learning
2.1 Industry Use
Uses of AI and machine learning (ML) are rapidly growing. A number of companies and individuals are using AI and ML to perform repetitive tasks, analyse large amounts of data and optimise programmes.
There are numerous industry applications for AI and ML in the Philippines, including the following examples.
- Social protection – a chatbot was launched by a non-partisan, youth-led citizens’ movement for good governance and a legal-watch assistance programme; it can simulate a conversation with a user in natural language to provide accurate and useful information on eligibility concerns and other queries related to the Social Amelioration Programme (SAP) and social pension. This chatbot won the #DigitalAgainstCovid19 Innovation Challenge organised by the Asian Development Bank and Open Government Partnership.
- Energy – an AI-powered system was launched by a private tech company; it assesses user profiles and behaviour using historical and real-time data to match users with the best-fitting retail electricity supplier (RES), allowing users to save up to 30% on their electricity bills. This system was supported by the Technology Innovation for Commercialisation (TECHNICOM) programme of the Department of Science and Technology – Philippine Council for Industry, Energy, and Emerging Technology Research and Development (DOST-PCIEERD). Moreover, a large power plant in the Philippines has integrated data science and AI into its operations by using a control loop system to feed regulated amounts of limestone into its furnace, significantly reducing sulphur oxide emissions.
- Water utilities – a Philippine start-up launched an AI solution, a real-time adjusting pump system (R-TAP), which regulates the pressure applied inside a water distribution system. This project benefitted from grants from the Department of Science and Technology (DOST) and the Development Innovation Ventures programme of the United States Agency for International Development (USAID).
- Customer service – numerous service providers in the Philippines have engaged software developers to create personalised chatbots that can communicate with and address the concerns of their customers.
There are many other use cases for AI and ML in other industries. In light of the benefits of utilising these technologies, one of the main objectives of the AI strategy of the Philippines is to recommend ways to effectively foster triple-helix research and development (R&D) collaboration among the government, industry and academic institutions, which is essential to national development.
3. Legislation and Directives
3.1 Jurisdictional Law
As discussed at 1.1 General Legal Background Framework, there is currently no statute in the Philippines that is solely dedicated to AI. However, agencies under the Philippine government have set forth policies that aim to promote the development and adoption of AI technology in the country.
The Philippines today is among the first 50 countries in the world that have launched a national AI strategy. The AI Roadmap of the DTI aims to guide the country’s AI development and adoption, and includes a comprehensive set of recommendations for government agencies, industry stakeholders and academic institutions to develop AI capabilities in the country.
The Philippine Supreme Court’s Strategic Plan for Judicial Innovations 2022-2027 (SPJI) aims to improve the efficiency of court operations through the use of technology, including AI. The Chief Justice of the Philippines believes that AI can benefit court legal researchers by making legal research faster and easier. The SPJI will enable faster and easier access to legal references through the redevelopment of the judiciary’s e-library with AI-enabled technology. The plan includes the installation of a search engine using natural language processing and ML to generate analysis based on previous cases or legal precedents.
On 18 April 2017, Senate Resolution No 344 was filed in the Senate of the Philippines, seeking to look into the plans and initiatives of the government with the aim of maximising the benefits of AI and other emerging technologies for the Filipino people.
On 1 March 2023, House Bill No 7396 was filed with the House of Representatives, seeking to create an Artificial Intelligence Development Authority (AIDA). If the bill is passed, the AIDA would be responsible for the development and implementation of a national AI strategy.
3.2 EU Law
3.2.1 Jurisdictional Commonalities
This is not relevant in the Philippines, a non-EU jurisdiction.
3.2.2 Jurisdictional Conflicts
This is not relevant in the Philippines, a non-EU jurisdiction.
3.3 US State Law
This is not relevant in the Philippines, a non-US jurisdiction.
4. Judicial Decisions
4.1 Judicial Decisions
Although no Philippine judicial decision has yet ruled on AI, Philippine courts tend to draw lessons from their counterparts in the EU and the USA. Judicial decisions and rulings from state courts in the USA and from courts in the EU have persuasive effect on Philippine jurisprudence.
One of the most significant cases in relation to generative AI and IP rights is the DABUS case decided by the European Patent Office (EPO). In this case, Stephen Thaler filed two patent applications, one for a “Food Container” and the other for “Devices and Methods for Attracting Enhanced Attention”, and named the Device for the Autonomous Bootstrapping of Unified Sentience (DABUS), an AI program, as the “inventor” in both. The EPO ruled that an AI program cannot be named as the inventor since the term refers to a natural person, which is the internationally applicable standard adopted by various national court rulings. It was held that an AI system cannot exercise the rights of an inventor since it does not have legal personality. Thaler’s applications were rejected.
Philippine courts will likely decide similarly to the EPO in the DABUS case since, under the Implementing Rules of the Intellectual Property Code of the Philippines, an inventor who can apply for a patent must be any person, whether natural or juridical. Philippine courts are unlikely to treat an AI system as either a natural or a juridical person and, thus, may decline to consider an AI system the inventor in a patent application.
4.2 Technology Definitions
The term AI is commonly used to refer to a broad range of technologies that involve the use of algorithms and ML to analyse data, recognise patterns and make decisions.
As mentioned at 1.1 General Legal Background Framework, the Philippines has not yet enacted any specialised statute or set of regulations that define AI, nor have judicial decisions provided any such definition.
However, in a speech, Philippine Chief Justice Alexander G Gesmundo revealed that the Supreme Court is looking to use AI to improve operations in the judiciary as part of its drive to unclog court dockets and expedite decisions. Through AI, “the SPJI will enable faster and easier access to legal research.” Judicial policy thus supports the use of AI to enhance judicial processes.
5. AI Regulatory Regimes
5.1 Key Regulatory Agencies
There are several key regulatory agencies that play a role in the development and implementation of policies and initiatives related to AI in the Philippines.
- The DTI is the agency responsible for the promotion and development of the country’s trade, industry and investments. The DTI is actively involved in the development of the AI Roadmap and works closely with other government agencies and private sector stakeholders to support the growth of the AI industry in the country.
- The DOST is the primary agency responsible for the development and implementation of policies and programmes related to science and technology, which includes AI. Its mandates include the formulation and implementation of policies, plans, programmes and projects related to science and technology. Furthermore, the DOST provides funding support for AI-related research and development projects, as well as other initiatives related to the development of the country’s innovation ecosystem, such as the development of national infrastructure, which includes AI R&D centres.
- The NPC plays a crucial role in ensuring that the use of AI by entities complies with the DPA, the NPC Circular No 22-04 on automated decision-making or profiling, and other relevant privacy and data protection laws and regulations.
- The Intellectual Property Office of the Philippines (IPOPHL) is responsible for receiving and processing applications for patents, trademarks and copyright registration in the Philippines. The IPOPHL may receive and process patent and trademark applications related to AI products and services. Software, digital databases or digital content related to AI-generated or AI-assisted works may involve copyrighted works over which the IPOPHL exercises jurisdiction.
- The Department of Information and Communications Technology (DICT) is mandated to develop policies, plans and strategies to propel the use and development of information and communications technology (ICT) in the Philippines. As such, the DICT assists in establishing policies that govern the utilisation and deployment of AI technologies in the country.
5.2 Technology Definitions
There is currently no regulatory definition of AI in the Philippines aside from the NPC’s rules on “automated processing” and “automated decision-making and profiling”.
However, there are several bills pending before the Congress which propose definitions of AI, as follows:
- AI refers to the ability of machines or computer programs, systems or software that are designed to perform tasks that simulate human intelligence, such as reasoning, learning, perception and problem-solving; and
- AI shall mean the simulation of human thought processes in a digital, computerised or artificial model.
Should regulatory agencies adopt different definitions of AI, this diversity could pose challenges for businesses that operate under multiple regulatory schemes. Moreover, narrower definitions may lead to less regulation and greater freedom in the use of AI, whereas a broader definition may suffer from overbreadth, adversely affecting potential commercial activity.
5.3 Regulatory Objectives
Owing to the DTI’s release of the AI Roadmap, the Philippines is now one of the first 50 countries in the world to have adopted a national AI strategy and policy. DTI Secretary Ramon Lopez stated that, according to research estimates, AI adoption can increase Philippine GDP by 12% by 2030, equivalent to USD92 billion. The AI Roadmap aims to transform the country into an AI powerhouse in the region.
The Philippines has been steadily advancing its digitalisation, improving its potential to become a global AI data centre in the future.
The DTI has several objectives, including:
- promoting consumer welfare;
- developing policies and programmes aimed at sustaining the growth and development of the Philippine economy;
- overseeing the effective implementation of laws on standards development, metrology, accreditation, testing and certification; and
- strengthening, promoting and developing an innovative and entrepreneurial ecosystem and culture in the Philippines through Republic Act No 11337 or the Innovative Start-up Act.
The DTI seeks to prevent harm by ensuring that businesses comply with laws on standards development, metrology, accreditation, testing and certification. The DTI also aims to promote consumer welfare by protecting their rights and interests, and to promote benefits such as job generation and inclusive growth through fostering a competitive and innovative industry and services sector.
The DOST oversees the creation and implementation of policies and plans that encourage the use of science and technology for the advancement of the nation. Its objectives include establishing and advocating ethical standards for the creation and application of AI. It has announced initiatives for AI research and development which aim to provide and promote AI-powered solutions for a range of issues, such as traffic management, disaster risk reduction and marine debris monitoring.
The DICT is responsible for the protection of ICT systems and for developing and implementing projects and policies that encourage the use of ICT for social improvement. It aims to improve internet quality and ensure that industries and support organisations have access to reliable and secure networks that are on par with acceptable global averages. It is also responsible for developing and promoting cybersecurity standards, including for systems that use AI, while preventing potential gender and social biases linked with the use of AI.
The NPC’s mandate is to protect the data privacy rights of data subjects. In the Philippines, data privacy is considered to be a human right. Data privacy rights are implicated in AI initiatives if there is automated processing of personal information. The NPC requires that, whenever personal data is processed, data controllers, particularly government entities, must:
- be transparent;
- have a legitimate purpose;
- apply proportionality; and
- be prepared to minimise potential risks to prevent harm to data subjects or people.
The Department of Justice (DOJ), the prosecutorial arm of the Philippine government, is tasked with enforcing criminal law, including cybersecurity and data privacy laws. The DOJ aims to prevent harm arising from the improper use of AI.
5.4 Enforcement Actions
To date, there are no published or publicly available AI-related enforcement actions by Philippine agencies.
6. Proposed Legislation and Regulations
6.1 Proposed Legislation and Regulations
Key proposed legislation on AI in the Philippines includes the following.
House Bill No 10457
House Bill No 10457 is entitled “An Act Establishing a National Strategy for the Development of Artificial Intelligence and Related Technologies, Creating for this purpose the National Centre for Artificial Intelligence Research, and for other purposes.”
This House Bill seeks to establish the National Centre for Artificial Intelligence Research (NCAIR), which is intended to be the primary policy-making and research body concerned with the development of AI and allied emergent technologies in the country. It shall focus on studying, harnessing, advancing, and/or transferring any beneficial AI creations or systems to uplift Filipino innovators, workers, industries, businesses and consumers.
Further, this House Bill seeks to reinforce the role of the National Innovation Council (NIC) of the Philippines by creating an AI sub-group under the NIC to promote synergy among the government, the private sector and academic institutions.
This House Bill provides for a national AI strategy aimed at, among others:
- developing the AI industry in the Philippines by reviewing and transforming business regulations for ease of doing business, especially in launching new products, platforms and services;
- providing funding for AI algorithmic innovations;
- incentivising higher educational institutions to promote AI research and development internships with local private institutions;
- identifying and supporting local AI start-ups;
- building a National Data Centre with reliable and robust infrastructure and data management systems;
- developing graduate programmes centred on data science and AI; and
- incentivising industries to offer learning and development programmes that improve digital/data literacy.
House Bill No 7396
House Bill No 7396 is entitled “An Act Promoting the Development and Regulation of Artificial Intelligence in the Philippines.”
This House Bill seeks to address potential risks and challenges by providing a comprehensive framework for the development and regulation of AI in the Philippines. It aims to promote and regulate the deployment of AI technologies to ensure that AI systems are developed, deployed and used in a manner that is consistent with ethical principles, protects human rights and dignity, and serves the public interest.
Further, this House Bill proposes the establishment of the AIDA, which would oversee the development and deployment of AI technologies.
7. Standard-Setting Bodies
7.1 National Standard-Setting Bodies
To date, there are no formal, content-specific AI standards issued by industry or government standard-setting bodies in the Philippines. However, there are calls for ethical guidelines for AI use in Philippine educational institutions.
Government and industry stakeholders in the Philippines increasingly recognise the need for ethical and responsible AI development and deployment. This is due to potential risks and challenges associated with AI, such as bias and discrimination, privacy concerns and potential job displacement. To address these risks and challenges, there is a growing emphasis on ensuring that AI is developed and used in a manner that aligns with ethical and social values.
7.2 International Standard-Setting Bodies
There is no published data on whether companies in the Philippines effectively follow international AI standards. In the absence of formalised or official national standards related to AI, international AI standards can and do provide guidance on issues such as data quality, privacy and security, algorithmic transparency, ethical considerations, and other issues in the development and use of AI.
While there are no mandatory standards in place, it is recommended that companies doing business in the Philippines at the very least be aware of international AI standards such as those developed by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) and consider incorporating these standards into their company policies and practices. Subcommittee 42 (SC 42) of the joint ISO/IEC technical committee (ISO/IEC JTC 1) is the international standards committee responsible for standardisation in the area of AI.
8. Government Use of AI
8.1 Government Use of AI
The Philippine government, both at the national and local level, has started to recognise the benefit of utilising AI. A number of departments and government units have established their respective AI strategies and/or agency-wide roadmaps with the aim of promoting the use of AI and integrating AI into their institutional systems.
Use cases of AI for government include data analytics and management, speeding up administrative processes such as applications for permits or licences, using AI algorithms in hiring government employees and contractors, and monitoring citizens’ activities to promote a safe environment. In recent news reports, even the Philippine Supreme Court is looking into using AI to improve the operations of the judiciary as part of its drive to unclog court dockets and expedite decisions.
Furthermore, reports show that certain government agencies in the Philippines are currently using facial recognition technology to identify persons under arrest warrants, those who are high-value targets, and members of terror groups evading law enforcers. However, the use of facial recognition and other biometric technologies by governments around the world has raised significant privacy concerns since facial recognition technology is being used to collect personal information without the consent or knowledge of individual subjects. Moreover, the use of biometrics, such as fingerprints and iris scans, raises concerns about potential misuse of data, unauthorised data sharing and risk of identity theft.
8.2 Judicial Decisions
No judicial decisions relating to government use of AI in the Philippines have yet been published. As discussed at 1.1 General Legal Background Framework, there are currently no Philippine Supreme Court cases that rule upon or define AI systems.
8.3 National Security
Information on whether the Philippine government uses AI in national security matters remains non-public. However, AI generally has a number of potential uses in national security matters, including intelligence gathering and analysis, surveillance, threat detection and response, and military operations.
9. Generative AI
9.1 Generative AI
Generative AI is a type of AI that can be used to create new content, including audio, code, images, texts, simulations and videos.
Some of the emerging generative AI technologies today are ChatGPT and DALL-E 2. ChatGPT is a chatbot developed by OpenAI, designed to generate human-like text in response to a prompt or question based on patterns learned from the vast amounts of text data on which it was trained. DALL-E 2, on the other hand, is an AI system that can create realistic images and art from a description in natural language.
Since generative AI systems are built and trained by human developers, a number of issues arise from the manner in which the developers have prepared the data. One such issue is bias in generated content: if the AI is trained on biased content, the generated content will also be biased. Aside from biased data sets, the AI algorithms themselves may suffer from biased design.
Issues may also arise concerning intellectual property ownership of generated content. Although the generative AI tool creates the content, the output may incorporate other published and protected works; organisations that do not audit generated content may take the output as it is, without checking whether any part of it was copied from a copyrighted work. Such organisations may be exposed to reputational and legal risk for publishing biased, offensive or copyrighted content.
Owing to the various issues arising from the use of generative AI, different jurisdictions now strive to develop standards and legal frameworks for the development and use of generative AI. In particular, the issue of bias can be addressed by carefully selecting the initial data used to train AI models to avoid incorrect or biased content, and by the conscious use of inclusive stakeholder input. Organisations should not rely completely on AI tools, especially for critical decisions. There must always be human intervention and human audit of content generated by an AI tool, to ensure that it is not incorrect, biased, offensive or protected work. One way to address this is for organisations to implement auditing systems and regularly monitor content generated by AI.
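By way of illustration, the following minimal sketch (in Python) shows one simple form such an auditing step could take: flagging AI-generated drafts that contain terms the organisation has marked as sensitive so that they are routed to a human reviewer. The function name, flagged terms and sample drafts are hypothetical and purely illustrative of the human-in-the-loop control described above, not a prescribed or complete compliance measure.

```python
def needs_human_review(text: str, flagged_terms: set[str]) -> bool:
    """Return True if the draft contains any term the organisation treats as sensitive."""
    lowered = text.lower()
    return any(term in lowered for term in flagged_terms)

# Hypothetical review criteria and AI-generated drafts, for illustration only.
flagged_terms = {"guaranteed returns", "no risk", "confidential client data"}
drafts = [
    "Our AI assistant suggests this product offers guaranteed returns.",
    "Summary of publicly available filings for the client meeting.",
]

for draft in drafts:
    if needs_human_review(draft, flagged_terms):
        print("HOLD for human review:", draft)
    else:
        print("Cleared automated check:", draft)
```

In practice, a simple keyword screen of this kind would only be one layer: organisations would typically add plagiarism, bias and accuracy checks, and ensure that a human ultimately reviews anything the automated step flags before publication.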
10. AI in the Practice of Law
10.1 Uses of AI in the Practice of Law
There are currently no specific regulations in the Philippines that govern the use of AI in the practice of law. That said, AI is increasingly used in the legal profession worldwide to automate routine tasks, enhance legal research and analysis, and improve overall efficiency and accuracy. Use cases include the following.
- Legal research: AI-powered legal research tools can help lawyers quickly identify relevant case law, statutes and other legal documents.
- Contract review and analysis: AI can be used to analyse and review contracts, identifying potential issues and areas of concern, as well as highlighting important terms and clauses.
- Predictive analytics: AI-powered predictive analytics can help lawyers anticipate legal outcomes and assess risks associated with different legal strategies.
- Document automation: AI can be used to automate the creation and drafting of legal documents, reducing the time and effort required to complete routine legal tasks.
In litigation, AI can be used in a variety of ways to improve efficiency and accuracy. For example, AI-powered discovery tools can help lawyers quickly identify relevant documents and information in large volumes of data, reducing the time and effort required for the discovery process. These tools can use ML algorithms to identify patterns and relationships in the data, making it easier for lawyers to identify key documents and evidence.
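As a simplified illustration of how such tools rank documents, the sketch below uses term-frequency weighting (TF-IDF) and cosine similarity from the open-source scikit-learn library to score a handful of invented documents against a hypothetical review query. Commercial e-discovery platforms rely on far more sophisticated models, so this is only a sketch of the underlying idea, under the assumption that scikit-learn is available.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical document collection and review query, for illustration only.
documents = [
    "Supply agreement between the parties dated 2021 with an arbitration clause",
    "Internal memo on office supplies procurement",
    "Email thread discussing breach of the supply agreement and damages",
]
query = "breach of supply agreement"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)   # one TF-IDF vector per document
query_vector = vectorizer.transform([query])       # vector for the review query

# Rank documents by cosine similarity to the query, most relevant first.
scores = cosine_similarity(query_vector, doc_matrix).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {documents[idx]}")
```

The design choice here is simply to surface likely relevant documents to a human reviewer earlier; the ranking does not replace the lawyer's judgment on relevance or privilege.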
AI can be used in automated support services, such as chatbots and virtual assistants, to help lawyers quickly access information and resources. These tools can provide answers to common questions, help with legal research, and provide other types of support to lawyers, improving overall efficiency and productivity.
Overall, AI has the potential to transform the practice of law by enhancing efficiency and accuracy and by improving access to legal services. As AI becomes more integrated into the legal profession, it will be important for lawyers and regulators to work together to ensure that its use aligns with ethical and legal principles, and that the benefits of AI technology are maximised in the name of social good while minimising potential risks and harm.
10.2 Ethical Considerations
Some of the key ethical issues or considerations arising from the use of AI in the practice of law are related to confidentiality, bias and accountability of lawyers.
When employing AI tools, it becomes imperative for lawyers to ensure that these tools do not compromise the confidentiality of their clients’ sensitive information. Maintaining the integrity of the attorney-client privilege requires implementing appropriate safeguards and adhering to strict data protection measures in the exercise of the profession.
As for algorithmic bias, it is important to keep in mind that AI can perpetuate biases if it is trained on biased data, or if it is designed or programmed to make biased decisions based on biased data. Lawyers must be aware of the potential for bias in AI tools in terms of algorithmic design and data sets, and take steps to mitigate bias through the use of best practices, that is, by using diverse, inclusive data sets, regularly monitoring and testing AI models for bias, and incorporating human oversight into decision-making processes.
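One simple way to “regularly monitor and test AI models for bias”, as suggested above, is to compare a tool’s selection rates across groups. The sketch below applies the widely cited four-fifths (80%) rule of thumb, which originates in US employment-selection guidance and is not a legal standard under Philippine law; the groups, outcomes and threshold are illustrative assumptions only.

```python
from collections import defaultdict

# (group, was_shortlisted) pairs representing a screening tool's hypothetical outputs.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, shortlisted in outcomes:
    totals[group] += 1
    selected[group] += int(shortlisted)

# Compare each group's selection rate against the best-performing group.
rates = {group: selected[group] / totals[group] for group in totals}
benchmark = max(rates.values())
for group, rate in rates.items():
    ratio = rate / benchmark
    status = "flag for human review" if ratio < 0.8 else "within the 80% threshold"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {status}")
```

A disparity flagged by such a check does not itself establish unlawful bias; it is a trigger for closer human scrutiny of the data and the model.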
With respect to the accountability of lawyers, professional bodies must ensure that lawyers remain ultimately accountable for the outputs or decisions made by AI tools which they use in the course of legal practice. This could mean carefully reviewing the output of AI tools and ensuring that the output is consistent with the ethical obligations of lawyers. As AI tools today are not devoid of error, lawyers will likely be required by courts to take responsibility for unintended outcomes. All in all, it is expected that professional bodies that regulate the practice of law will require lawyers to use AI tools in ways that are consistent with their ethical obligations under the Code of Professional Responsibility and Accountability (CPRA) and ensure that attorneys remain meaningfully responsive to the needs of clients.
11. Theories of Liability
11.1 Theories of Liability
There are two evident theories of liability under Philippine law for personal injury or commercial harm that may result from AI-enabled technologies, ie, contractual liability and tort/quasi-delict liability.
In the case of Cathay Pacific Airways Ltd. v Sps. Vasquez (GR No 150843, 14 March 2003), breach of contract was defined as the failure without legal reason to comply with the terms of a contract, or the failure, without legal excuse, to perform any promise which forms the whole or part of the contract.
As such, a person or entity which breaches a contract because of reliance on an AI-enabled decision-making tool, or an AI tool that automated the performance of an obligation which failed, may be held liable for breach of contract for failing to perform its obligation. The aggrieved party to the contract may file for specific performance, restitution, and/or damages in accordance with the Civil Code of the Philippines.
Moreover, a person or entity may be held liable for quasi-delict or tort. Under the Civil Code of the Philippines, whoever by act or omission causes damage to another, there being fault or negligence, is obliged to pay for the damage done.
The failure of a person or entity to review content or work done by an AI tool in the performance of an activity which may cause harm or damage to another may constitute negligence, as negligence is defined as the omission to do something which a reasonable person, guided by those considerations which ordinarily regulate the conduct of human affairs, would do, or the doing of something which a prudent and reasonable person would not do. A person found negligent shall be liable for the injury caused to another person or entity.
Philippine courts will look to the decisions of courts in foreign jurisdictions as sources of persuasive, but not controlling, doctrine. For tort, Philippine courts occasionally turn to American jurisprudence, including state law. For contracts, Philippine courts have in the past borrowed from Spanish civil law, since the Civil Code of the Philippines was, in part, based on the Spanish Civil Code.
In other jurisdictions, AI liability insurance has started to play a role in mitigating the risks of persons or entities that use AI in the performance of their work. For example, there has been a growth in the use of AI tools among healthcare practitioners. Some of these are AI virtual care management services which use analytics, AI and wearable technology solutions to monitor patients, enforce medical protocol adherence, provide virtual care and manage big data sets of medical records. In an article by Stern, Goldfarb, Minssen and Price II (2022), well-designed AI liability insurance can potentially help mitigate the liability risks and uncertainties for medical practitioners, so that practitioners can use AI tools to support their diagnosis and treatment. Conversely, insurance can also help technology developers who hesitate to commercialise medical AI tools, since persistent liability issues tend to have an adverse effect on innovation.
However, the discussion of AI liability raises the question of who ought to be liable for the harm or damage caused by the use of AI technology – should it be the AI developer or the user of the AI technology?
Philippine courts and regulators occasionally borrow from emerging regulations and jurisprudence elsewhere. In 2019, the EU released “Liability for Artificial Intelligence and other Emerging Technologies”. According to this document, liabilities may attach to different individuals. Persons operating AI-driven models in public places will be strictly liable for harm done by such models, while manufacturers of products that incorporate AI technology will be liable for defects in their products. The document also speaks of vicarious liability for autonomous systems, eg, AI-driven surgical robots used in hospitals. If a surgical robot malfunctions and harms a patient, the liability of the hospital, in making use of such technology, should correspond to the otherwise existing vicarious liability regime of a principal for its auxiliaries. It is possible that Philippine regulators, through the course of policy development and rule-making, may turn to EU developments as persuasive (but not controlling) sources of law.
11.2 Regulatory
There are no specific regulations regarding the imposition and allocation of liability on the use of AI in the Philippines. However, there are several background laws and regulations that could generally or suppletorily apply to AI systems, including the following.
- The Civil Code of the Philippines: the Civil Code provides for general principles of liability, such as the principle of fault-based liability and the principle of strict liability.
- The Intellectual Property Code of the Philippines: the Intellectual Property Code protects intellectual property rights, such as patents, copyrights, and trademarks.
- The DPA: the DPA protects the personal information of individuals.
- The Anti-Cybercrime Law of 2012: the Anti-Cybercrime Law prohibits a number of activities related to computer systems, including computer fraud, identity theft and cyberstalking.
A key challenge in developing regulations for AI is determining who should be held liable for the actions of AI systems. For example, if an AI system makes a decision that results in harm, should the company that developed the system be held liable, or should the person who deployed the system be held liable?
There is no easy answer, and it is likely that Philippine jurisprudence will evolve over time as AI systems become more sophisticated and localised. One can expect that courts of law will exhort legal professionals and judges to consider principles of justice and fairness in designing and developing AI-specific regulation, if new legal codes are to be seen both as enablers for AI business initiatives (innovation) and as human value-enhancing norms (dignity).
Key factors relevant to the imposition and allocation of liability may include the following considerations:
- AI has the potential to cause significant harm, both to individuals and to society as a whole;
- it is important to develop regulations that promote ethical AI development and deployment; and
- a key challenge is locating liability, or the determination of who should be held liable for the actions of AI systems.
12. General Technology-Driven AI Legal Issues
12.1 Algorithmic Bias
The Philippines currently lacks enacted or proposed AI-specific regulations that address algorithmic bias, indicating a gap in regulatory frameworks in this area.
To counteract bias in algorithms, industry has undertaken certain initiatives. These include multisectoral initiatives aimed at formulating ethical guidelines to guide the development and deployment of algorithms. Moreover, companies are exhorted to utilise diverse and inclusive data sets during the training process to mitigate algorithmic bias. Organisations have also been called upon to implement internal auditing and monitoring mechanisms to detect and rectify any bias that may arise within their algorithms.
However, regulatory action concerning algorithmic bias is limited or unpublished. Nevertheless, in terms of regulatory development, NPC Circular No 22-04 provides that the NPC may issue cease and desist orders for failure of the personal information controller or processor to disclose its automated decision-making or profiling operation through appropriate notification processes set forth in the Circular. This is a manifestation of higher regulatory scrutiny over processes that involve AI in personal data processing.
12.2 Data Protection and Privacy
The DPA of 2012 provides that it is the policy of the state to protect the fundamental human right to privacy, while ensuring the free flow of information to promote innovation and growth. As such, the law provides for standards that personal information controllers and processors should comply with in order to protect the personal data of their data subjects. The protection of personal data shall benefit the data subjects by ensuring that these data will not be used by any unauthorised entity or for any unauthorised purpose. This is especially true with respect to personal data that is processed by AI systems.
Based on recent news reports, some companies that process large volumes of personal data through AI systems have experienced cyber-attacks resulting in data breaches. These data breaches have caused the companies financial loss, reputational damage and legal sanctions. Thus, it is necessary for entities processing personal data through AI systems to establish security measures and protocols that can respond to and address the consequences of cyber-attacks and other forms of personal data breach.
As discussed at 1.1 General Legal Background Framework, the DPA refers to automated processes which data privacy regulations further define as automated decision-making associated with AI functions. The NPC is the regulator mandated to ensure compliance with data privacy standards, and this includes automated processes that implicate data privacy. Under the IRR of the DPA, data subjects have the right to know about the processing of their personal data, which includes any instance of automated decision-making and profiling. In relation to automated decision-making involving AI, the NPC issued Circular No 22-04, which requires a personal information controller or processor which carries out any automated decision-making or profiling operation to indicate its registration with the NPC and identify the data processing system involved in the automated decision-making or profiling operation. Registration and transparency requirements such as these form part of the bundle of risk mitigation measures of the NPC.
12.3 Facial Recognition and Biometrics
In the Philippines, the use of biometric data and facial recognition technology raises legal concerns regarding data privacy. Proposed amendments to the DPA seek to redefine “sensitive personal information” to include biometric and genetic data due to their inherent sensitivity. While AI technology and business practices can enhance the protection of personal data by identifying sensitive information and by automating compliance with data protection rules, they can also create risks such as bias and unauthorised access to personal data (a type of data breach) or third-party intrusion.
Moreover, processing personal data without human supervision carries the risk of inaccurate and unreliable results, leading to unfair or harmful treatment of individuals. Pursuant to general contract and tort liability principles, organisations ought to consider the advantages and disadvantages of AI technology and business practices for protecting personal data, depending on the type of data being processed (eg, is the data considered to be sensitive personal information under Philippine law?) and the specific use cases for AI. Appropriate safeguards must be implemented to minimise risks and maximise the benefits of AI.
One potential risk of AI technology is the creation of a “filter bubble”, in which an algorithm assumes what information users want to see and serves up content based on that assumption. Building AI models with data privacy in mind is key to ensuring that consumer data is protected. The three characteristics of big data ‒ volume, variety and velocity ‒ allow for more powerful and granular analysis of data, but companies must prioritise trust over transactions to comply with new privacy rules.
Improper protection of personal data can result in database breaches, leading to identity theft, financial loss, damage to reputation, and legal liabilities or regulatory sanction. Therefore, businesses may find it necessary under Philippine privacy laws to implement appropriate security measures to protect personal data and ensure compliance with data privacy laws.
12.4 Automated Decision-Making
Automated decision-making technology refers to the use of algorithms, ML and other forms of AI to make decisions without direct human input. In the Philippines, only NPC Circular No 22-04 addresses automated decision-making by requiring the personal information controller or processor to disclose its automated decision-making or profiling operation through appropriate notification processes set forth in the Circular. If the NPC finds that there is a failure to disclose automated decision-making or profiling operations, the NPC may issue a cease and desist order against the controller or processor at fault.
The use of facial recognition and biometric information is subject to several laws and regulations that impose restrictions on the collection, use and disclosure of such information, as follows.
Under the DPA, personal information can only be processed by entities if such processing is not otherwise prohibited by law and if at least one of the following lawful criteria exists:
- the data subject has given their consent;
- the processing is necessary for fulfilling a contract or taking steps before entering into a contract;
- the processing is necessary for compliance with a legal obligation;
- the processing is necessary to protect the vital interests of the data subject, including life and health;
- the processing is necessary to respond to a national emergency or comply with public order and safety requirements; and
- the processing is necessary for the legitimate interests pursued by the data controller or third party, except when overridden by the fundamental rights of the data subject under the Philippine Constitution.
The consent of the data subject contemplated under the DPA refers to any freely given, specific, informed indication of will, in which the individual agrees to the collection and processing of their personal information. The consent shall be evidenced by written, electronic or recorded means.
Additionally, the Cybercrime Prevention Act of 2012 criminalises certain acts that pose risks to companies using facial recognition and biometric information, including the following.
- Illegal access ‒ the unauthorised access to a computer system or network.
- Illegal interception ‒ the unauthorised interception of any data transmission.
- Data interference ‒ the unauthorised alteration, deletion or destruction of data.
- System interference ‒ the unauthorised hindering of, or interference with, a computer system or network, or causing it to malfunction.
- Misuse of devices ‒ the unauthorised use or possession of any computer device or system.
For instance, the offence of illegal access could potentially apply to the unauthorised use of facial recognition technology, where the system is accessed without proper consent or authorisation.
As for the risks which companies may face, companies that use facial recognition technology in the Philippines could be confronted with legal and enforcement risks and potential liability under existing data protection laws. Violations of the DPA, for example, can result in significant fines and/or penalties as provided under NPC Circular No 22-01, or the Guidelines on Administrative Fines. Facial recognition and biometrics are usually treated as areas in which data privacy laws squarely apply.
12.5 Transparency
Under Philippine law, the use of chatbots and other technologies as substitutes for services rendered by natural persons is generally governed by the Consumer Act of the Philippines and the DPA. The Consumer Act does not single out AI but applies more generally to consumer-bound goods and services. Under the Consumer Act, providers of goods and services may be held liable for defects relating to goods and services offered to their customers, regardless of whether such goods and services were produced through AI technologies. This is in accordance with the state’s policy to promote and encourage fair, honest and equitable relations among parties in consumer transactions.
Further, the DPA provides that data subjects have the right to be informed whether personal data pertaining to them shall be, are being, or have been processed, including the existence of automated decision-making and profiling. Data subjects also have the right to object to the processing of their personal data, including processing for direct marketing, automated processing or profiling, and the right to access information on automated processes where the data will, or is likely to, be made the sole basis for any decision that significantly affects or will affect the data subject. Moreover, personal information controllers and processors that carry out any automated decision-making operation or profiling must disclose this when registering their data processing systems with the NPC.
Today, AI tools are being used to make undisclosed suggestions or recommendations of products or content to users based on their browsing history, search queries and personal data readily available online. Under Philippine data privacy law, such use of personal data should be made with the consent of the data subjects or on another lawful basis. Moreover, because this type of processing may constitute “profiling” as defined by data privacy regulation, the processing system must be specifically disclosed to and registered with the NPC.
On the other hand, some jurisdictions provide for safe harbours or exemptions from mandatory disclosure of AI use in certain circumstances. For example, in the USA, the National Institute of Standards and Technology (NIST) has developed a framework for safe harbours in situations where disclosure could reveal confidential business information which can be critical for AI systems developed by NIST-compliant entities. The Philippines may someday draw inspiration from this and incorporate safe harbours in its future AI legislation. However, with respect to AI, there is no explicit balancing of interests analysis under Philippine data privacy laws in ways that equate to a resort to safe harbours as understood in the USA.
12.6 Anti-competitive Conduct
The application of AI technology in price-setting may raise competition and antitrust concerns, particularly with the possibility of collusion among firms that use AI algorithms. This can result in higher prices for consumers and reduced market competition. The use of advanced AI algorithms can also lead to market dominance by certain firms, which can exclude competitors and harm consumers. Furthermore, the lack of transparency and accountability in complex algorithms used for pricing can make it challenging to detect and address anti-competitive behaviour.
To address these concerns, antitrust regulations and data privacy rules must establish standards for AI-related practices. In the Philippines, laws such as the Philippine Competition Act of 2015 and the Consumer Act of the Philippines protect consumers from anti-competitive behaviour, including price-fixing and deceptive pricing. The Philippine Competition Commission (PCC) is responsible for enforcing competition and antitrust laws in the country and is closely monitoring the use of AI in pricing. The PCC is prepared to take action against companies that engage in anti-competitive practices related to AI in pricing.
13. Sustainability and Climate Change
13.1 Sustainability and Climate Change
AI technology can provide valuable insights and predictions that can inform decision-making, help identify patterns and trends, and assist in developing strategies for various aspects of climate change mitigation and adaptation.
One area where AI can be of particular use is in the analysis of large data sets related to climate change. Using ML algorithms, AI can identify correlations and patterns. This can lead to more accurate predictions about future climate trends and a better understanding of the impact of human activities on the environment.
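As a minimal illustration of the kind of pattern-finding described above, the sketch below fits a simple linear trend to an invented series of annual average temperatures and extrapolates it. The figures are hypothetical and real climate modelling uses far richer data and far more sophisticated methods; the sketch only shows the basic idea of learning a trend from historical data.

```python
import numpy as np

# Hypothetical annual average temperatures (°C); invented figures for illustration only.
years = np.array([2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022])
avg_temp_c = np.array([27.1, 27.3, 27.2, 27.4, 27.6, 27.5, 27.7, 27.8])

# Fit a simple linear trend to the historical series and extrapolate it forward.
slope, intercept = np.polyfit(years, avg_temp_c, 1)
projection_2030 = slope * 2030 + intercept

print(f"Estimated trend: {slope:.3f} °C per year")
print(f"Projected average temperature in 2030: {projection_2030:.2f} °C")
```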
In the Philippines, the utilisation of AI in combating climate change is still in its nascent stages and has not yet gained widespread adoption or availability. However, there is growing recognition of the potential of AI in addressing climate-related challenges and fostering sustainable practices.
14. AI in Employment
14.1 Hiring Practices and Termination of Employment Practices
Many companies in the Philippines have begun using AI tools to pre-screen applicants, conduct interviews and review employee performance. Although these AI tools are meant to improve HR operations, there is a risk that they may produce bias against certain groups of people, such as people who belong to a particular ethnicity, persons with disabilities, and women. This may result from biased information being fed into the AI tool, and may in turn lead to discriminatory practices in hiring, appraisal and termination.
Under the Labour Code of the Philippines, it is unlawful for an employer to discriminate with respect to terms and conditions of employment on account of sex, age, marital status, pregnancy status, disability, union membership, among others. The Code also designates specific just and authorised causes for termination, thereby prohibiting employers from terminating employment based on race, belief, religion, association, gender or medical condition.
Should discriminatory practices result from the use of automated technologies, this can give rise to a claim for damages on the basis of unlawful discrimination. Also, if the discrimination results in illegal dismissal, an illegal dismissal case may be filed, which may hold the employer liable for full backwages, damages and reinstatement of the terminated employee.
14.2 Employee Evaluation and Monitoring
Various technologies are being used to evaluate and monitor employee performance, especially in remote work settings. These technologies include time tracking software, project management software, communication tools, performance review software, and employee monitoring software. Time tracking software tracks the time employees spend on different tasks, while project management software monitors project progress and risks. Communication tools like email and video conferencing keep remote workers in touch with colleagues. Performance review software assists in creating and managing fair and consistent employee reviews. Employee monitoring software tracks employee activity on company devices, such as website visits, computer usage, and email communication, to identify potential problems like time theft and misuse of company resources.
The benefits of using technology to evaluate employee performance and monitor employee work include the following.
- Improved productivity ‒ technology can help to improve employee productivity by providing employees with the tools and resources they need to do their jobs effectively.
- Reduced costs ‒ technology can help to reduce costs by automating tasks that are currently done manually.
- Improved decision-making ‒ technology can help to improve decision-making by providing managers with access to data and insights that they would not otherwise have.
- Increased employee engagement ‒ technology can help to increase employee engagement by providing employees with opportunities to learn and grow, and by making their work more meaningful.
The use of technology to evaluate employee performance and monitor their work may potentially harm employees in several ways, including the following.
- Job insecurity ‒ there is a possibility of job insecurity, as the use of technology may allow employers to automate tasks that were previously performed by employees.
- Increased stress ‒ the use of technology may cause employees to feel more stressed as they may feel pressure to be always connected and productive.
- Privacy concerns ‒ there are privacy concerns associated with the use of technology, as employers may be able to monitor employee activity and communications.
- Discrimination ‒ there is a risk of discrimination, as employers may use technology to make biased decisions regarding hiring, firing and promotions.
Using technology to monitor and evaluate employee performance must be checked against relevant provisions of the Labour Code of the Philippines, such as those outlined below.
- Right to privacy ‒ both the Labour Code and data privacy laws recognise an employee’s right to privacy. Without appropriate transparency measures and company policies in place, monitoring an employee’s computer and internet usage without their knowledge and consent may violate this right.
- Non-discrimination ‒ the Labour Code prohibits discrimination against employees based on their race, sex, age or religion.
- Right to security of tenure ‒ the Labour Code guarantees an employee’s right to security of tenure, which permits termination only for a just or authorised cause and with observance of due process.
In addition, the use of technology for employee monitoring and evaluation may also be subject to the DPA and the Cybercrime Prevention Act. Under the DPA, employers must ensure that lawful criteria exist before collecting, processing or storing an employee’s personal information. The Cybercrime Prevention Act prohibits unauthorised access, use or interception of computer data.
15. AI in Industry Sectors
15.1 Digital Platform Companies
The use of AI in digital platform companies, such as car services and food delivery, is increasing. These companies increasingly turn to AI technology to enhance services, streamline operations and reduce costs.
In the context of car services, AI is being used to predict demand for ride services, allowing companies to optimise their operations and reduce costs. AI algorithms can analyse historical data and real-time information to predict when and where demand for car services is likely to be highest. Moreover, AI is used to enhance the customer experience of car services. Some ride service companies are using AI-powered chatbots to provide 24/7 customer support and answer frequently asked questions.
In the food delivery industry, AI is being used to automate and optimise various aspects of the delivery process. It is used to predict customer preferences, optimise delivery routes, and improve the accuracy and speed of food preparation. For instance, AI is being used to provide personalised food recommendations to customers, based on their past orders, preferences and dietary restrictions. This enhances the customer experience and increases the likelihood of repeat orders. It is also used to optimise delivery routes in order to reduce travel time and distance between deliveries.
The use of AI in digital platform companies has significant implications for both employment and regulatory aspects. The increasing use of AI in digital platform companies has led to concerns about job displacement. As AI becomes more prevalent in these industries, there is a risk that certain jobs may become obsolete, leading to job losses and the need for retraining. Additionally, the use of AI in digital platform companies requires the collection and processing of large amounts of user data. This requires robust privacy management programmes and stricter data privacy compliance.
15.2 Financial Services
Since there is no content-specific code, statute or regulation targeting the use of AI in the Philippines, the use of AI by financial services companies is governed by other laws in the country, which can be laws of general application or special laws that apply to specific subjects. These laws and regulations include the DPA and other issuances of the NPC. Data privacy laws provide for rules that financial services companies are required to comply with in relation to collection, use, disclosure and disposal of personal information obtained from customers and clients. This includes data that may be processed by AI systems.
Furthermore, the Bangko Sentral ng Pilipinas (Philippine Central Bank) has established a regulatory sandbox framework to facilitate the testing and development of innovative technologies, including AI-driven financial technologies. This framework allows financial services companies to test their AI-driven fintech in a controlled environment under the supervision of a regulator.
There are, however, a number of risks involved with financial services companies using AI, including the following.
- Financial services companies may store and process large amounts of their customers' personal and financial information. Without adequate security measures and safeguards, this data may be prone to unauthorised disclosure, which may subject the company to regulatory fines and administrative sanctions.
- Bias and discrimination are also risks in certain operations of financial services companies. An AI system trained on biased content can be correspondingly biased towards certain demographic attributes, some of which may not be relevant to a customer's credit rating.
- There may be operational risks, since AI systems can be complex and require significant resources to operate effectively. If the systems are not properly maintained, updated, and monitored, they can break down or generate incorrect results, leading to financial loss.
- Financial services companies may face reputational risk. Given that they must comply with a range of regulations when using AI, including data privacy, consumer protection and anti-discrimination laws, failure to comply with any of these regulations can result in significant reputational damage.
To mitigate such risks, financial services companies should take an active role in managing their AI systems, maintain robust and reliable information security systems, and develop comprehensive protocols to monitor for bias and discrimination.
15.3 Healthcare
There are currently no special laws or regulations in the Philippines that govern the use of AI in healthcare.
The Philippine Council for Health Research and Development, under the DOST, in partnership with the Department of Health (DOH), has launched the Philippine Health Information Exchange (PHIE). The PHIE is a platform for secure electronic access and efficient exchange of health data and/or information among health facilities, healthcare providers, health information organisations and government agencies, in accordance with set national standards and in the interest of public health. The PHIE may include AI initiatives within its larger system.
The use of AI in healthcare comes with potential risks to patient treatment, including the following.
- Accuracy: AI systems' performance depends on the accuracy of their training data. Inaccurate or biased data can result in the AI system making flawed decisions.
- Bias: AI systems can replicate human biases if they are trained on biased data. This can lead to discriminatory treatment of certain patient groups.
- Security: AI systems are vulnerable to cyber-attack. Breaches could result in the theft or misuse of patient data.
One risk associated with the use of AI in healthcare is the presence of hidden biases in repurposed training data. Large data sets used to train AI systems can contain biases that can be transferred to the AI system. For instance, if an AI system is trained using a medical records data set that only includes patients from a specific race or ethnicity, it may be more likely to make inaccurate diagnoses for patients from other racial or ethnic groups.
There are various ways in which AI can enhance patient care and improve outcomes, including the following.
- Diagnosis: AI can assist doctors in accurately diagnosing diseases by analysing medical images and data.
- Treatment: AI can be utilised to create personalised treatment plans for individual patients and develop innovative treatments for diseases.
- Surgery: AI can guide robotic surgeons during procedures, which can reduce complications and improve accuracy.
- Healthcare administration: AI can automate tasks such as insurance claims processing and appointment scheduling, allowing doctors and nurses to spend more time with patients.
Although the use of AI in healthcare shows promise in improving healthcare delivery, there are several obstacles that must be addressed. These include concerns about data privacy, accuracy, cost and regulation. AI systems collect and process vast amounts of data, raising concerns about data privacy. Additionally, the accuracy of AI systems is reliant on the quality of the data they are trained on. Biased or inaccurate data can lead to erroneous decisions. Another challenge is the high cost of developing and deploying AI systems. Finally, AI systems are subject to a variety of regulations that generally apply to health technology, which can hinder their adoption and market entry.
In healthcare, AI is a valuable tool for digitalisation, precision medicine, and improving patient experience. Technology applications can encourage healthier behaviour, and the Philippines has already developed two AI technologies to aid healthcare during the COVID-19 pandemic. These technologies help in diagnosing COVID-19 cases and monitoring cases for local government managers, as well as facilitating doctor-patient communication and reducing hospital wait times through telemedicine. AI can perform various tasks autonomously, freeing up medical practitioners to focus on improving patient outcomes.
A start-up in the Philippines is using AI to streamline medical supply procurement for hospitals. This company offers a platform with AI canvassing, digitised procurement and direct-to-brand pricing.
Robotic surgery in the Philippines has some challenges, including high costs for purchasing and maintaining robotic surgical systems, expensive and time-consuming training for surgeons, limited availability in some hospitals, and insufficient insurance coverage. Nevertheless, the use of robotic surgery is increasing in the country, mainly because of its advantages over traditional surgery. Benefits include greater accuracy and precision, minimally invasive procedures resulting in less pain and quicker recovery time, and improved cosmetic outcomes due to smaller incisions. Robotic surgery is particularly useful for treating urological cancers.
ML is playing a vital role in digital healthcare in the Philippines, with several key functions. First, ML can help doctors make more accurate diagnoses by analysing medical images and data to identify patterns indicative of diseases. Second, ML is being used to develop new treatments for diseases and to personalise treatment plans for individual patients by identifying genes responsible for diseases and developing drugs to target them. Third, ML can guide robotic surgery to improve accuracy and reduce complications by identifying the best surgical approach for a particular patient and by controlling robotic surgical instruments. Finally, ML can automate tasks such as scheduling appointments and processing insurance claims, allowing doctors and nurses to spend more time with patients, and can send reminders to patients at risk of developing a disease. Although ML is still in its early stages, it has the potential to transform healthcare delivery, leading to better patient care, improved outcomes, and more affordable and accessible healthcare for all.
Natural language processing (NLP) is an area of AI that involves the interaction between computers and human languages. Its applications include machine translation, text summarisation and question answering.
In the context of Philippine laws, the DPA is a key regulatory framework for government agencies and private sector participants in the healthcare sector, with respect to any aspect of the data life cycle of healthcare data.
16. Intellectual Property
16.1 Applicability of Patent and Copyright Law
Issues as to whether AI technology can be an inventor or co-inventor for patent purposes or an author or co-author for copyright and moral right purposes are not clearly settled under current Philippine laws. There is currently no published judicial or agency decision discussing this matter.
However, under the Intellectual Property Code, an “author” refers to a natural person who created the work. This may suggest that AI technology cannot be named as an author for purposes of copyright. As for patent law, “inventor” means any person who, at the filing date of application, had the right to the patent.
In the Philippines, the issue of whether AI technology can be an inventor or co-inventor for patent purposes, or an author or co-author for copyright and moral rights purposes, remains a topic of debate. However, as discussed at 4.1 Judicial Decisions, it is likely that Philippine courts will generally follow the approach taken in the USA and Europe (eg, the DABUS case decided by the EPO).
16.2 Applicability of Trade Secret and Similar Protection
The use of trade secrets and intellectual property rights can be an effective means of protecting AI technologies and data. Trade secrets are IP rights over confidential information, which may be sold or licensed. This information can be commercially valuable to persons or entities, giving them a competitive advantage in their respective industries. Thus, companies can claim trade secret protection over the AI technologies that they have developed, the products and services produced through AI, and the data that they have generated with AI. Companies can also apply for other IP rights, such as patents and trademarks, to protect such AI technologies.
To protect their AI technologies and data through trade secrets and IP rights, companies should identify the trade secrets and IP that are critical to the AI technologies and data they use or operate, promptly apply for registration of their registrable IP, and implement reasonable measures to protect their IP rights.
Companies should consider developing contractual provisions in agreements involving the use of AI technologies and data, requiring third parties to protect the same and to prevent unauthorised use by, and disclosure to, other entities. These agreements can also include non-compete, non-disclosure and restricted-use provisions. Furthermore, companies should actively monitor for unauthorised use or disclosure of trade secrets. This can include conducting audits of their AI systems to ensure that the information is being used only as authorised and as designed, and taking legal action against any unauthorised use or disclosure.
16.3 AI-Generated Works of Art and Works of Authorship
The Intellectual Property Code of the Philippines provides that only original intellectual creations in the literary and artistic domain, such as drawings and paintings, are copyrightable and protected from the moment of their creation. However, copyright ownership only belongs to the natural person who created the work or whose name appears on it.
The Intellectual Property Code allows for derivative works based on pre-existing works. This means that a human author may be able to claim copyright protection for an AI-generated work if they have made a meaningful contribution to it. For example, if a human author provides the AI with the initial concept or idea for the work, or if the human author makes significant changes to the AI's output, then the human author may be able to claim copyright ownership.
16.4 OpenAI
OpenAI has developed a number of language models, such as ChatGPT, that can be used to generate written content. OpenAI's development of ChatGPT raises intricate questions regarding the ownership of intellectual property when these models are utilised to generate new works, such as books or screenplays. Determining the rightful ownership of such works is a complex matter that requires careful analysis and consideration.
Additionally, using OpenAI's language models for generating content or material may result in inadvertent infringement of copyright or trademark. OpenAI does not provide any guarantee that the content generated will be free of intellectual property issues.
17. Issues for In-House Attorneys
17.1 AI Issues for In-House Attorneys
Since there are currently no statutes or regulations that, as a whole, specifically address the development and deployment of AI in the Philippines, in-house attorneys may face legal uncertainty when advising on compliance issues. It will be important for in-house counsel to gauge the risk appetite of their corporate client, on the one hand, and to develop a sense of the degree of regulatory scrutiny (or lack of it), on the other. This will remain a challenge. A key initiative would be to take an active role in maintaining dialogue with relevant government agencies, if only to discuss the boundaries of permissible AI use.
Because the use of AI will usually implicate personal data, in-house attorneys should ensure that their companies' use of AI technology is always grounded in one or more of the lawful criteria under the DPA, such as consent or legitimate interests. Security measures must be in place to prevent unauthorised access, use or disclosure of personal data. In addition, AI technologies, especially generative AI, can raise questions relating to ownership, licensing and infringement of IP rights. In-house attorneys should ensure that their companies have appropriate policies, protocols and agreements in place to address such issues.
There are other potential risks and liabilities associated with the use of AI, especially if services produced by the company's AI systems fail to meet the legal standards and requirements that apply to the company. In-house attorneys should consider:
- developing and implementing risk management strategies and policies to mitigate risks and liabilities that AI-driven data systems may bring about;
- being proactive in identifying and monitoring potential legal issues in the use of AI and new legal developments elsewhere; and
- working closely with all stakeholders to ensure that their company’s use of AI technologies is compliant with applicable laws and regulations.
18. Advising Corporate Boards of Directors
18.1 Advising Directors
When advising corporate boards of directors in identifying and mitigating risks in the adoption of AI in the Philippines, there are several key issues that should be considered.
- Ethical considerations ‒ companies must consider the ethical implications of using AI and ensure that the technology is being used in a responsible and ethical manner. This includes addressing potential biases in the data and algorithms used in AI systems. Ethical use may implicate background human relations norms found in Articles 19-21 of the Civil Code of the Philippines.
- Privacy and data protection ‒ companies must ensure that the use of AI complies with Philippine data protection laws, particularly the DPA, which requires companies to apply lawful criteria (including consent) before collecting and processing personal data. One important guideline is NPC Circular No 17-01, which requires data controllers to notify the NPC of their automated decision-making systems. The circular requires data controllers to provide the NPC with a description of the automated decision-making system, including the purpose of the system and the types of personal data that will be processed, as well as information about the logic involved in the decision-making process and any human intervention that may be involved.
- Security risks ‒ companies must address potential security risks associated with AI, including cyber-attacks and data breaches.
- General legal and regulatory compliance ‒ companies must ensure that their use of AI is in compliance with applicable laws and regulations, including those related to intellectual property, competition and consumer protection.
- Workforce displacement ‒ companies must address the potential impact of AI on the workforce, including the displacement of workers, and develop strategies to mitigate these impacts. AI has the potential to displace both low-skilled and highly skilled workers.
- Transparency, accountability and explainability ‒ companies must ensure that their use of AI is transparent and accountable, and that they are able to explain how their AI systems make decisions.
- Governance and oversight ‒ companies must establish appropriate governance and oversight structures to ensure that the use of AI is aligned with the company's overall strategy and goals, and that risks are identified and mitigated in a timely and effective manner.
By addressing the foregoing key issues, corporate boards can ensure that the adoption of AI in their companies is done in a responsible and sustainable manner, while maximising the benefits of this transformative technology.