Charting the Future of Artificial Intelligence


[By: Begoña Vega, Head of AI Models & Applications. Published in Big Data Magazine]


Artificial intelligence (AI) has made significant progress in recent years, proving to be a powerful tool across multiple sectors. Task automation, informed decision-making and content customisation are just some examples of how AI is useful at the enterprise level.

Moreover, in recent years there has been dazzling progress in various branches of AI. Computer vision now enables applications such as facial recognition and the detection of objects in images and video. Natural language processing has advanced markedly, especially in machine translation, language understanding and chatbots. Robotics, likewise, has seen the development of intelligent robots able to carry out complex tasks in dynamic, unstructured environments.

But if one area has experienced a real revolution, it is the arrival on the scene of generative AI. Unlike predictive AI, which identifies patterns and makes forecasts to carry out predefined tasks efficiently, generative AI can create new and unique content in the form of text, images and audio.


The benefits of its use for companies are manifold. Consider, for example, the development of products and responses tailored to each user, further personalising the content of the messages they receive and thus improving the user experience. Other uses range from creating web content to producing an advertising campaign. In science and technology, generative AI can produce designs for new technological products or help conceive experiments, allowing scientists to research more quickly and efficiently. It is also an extremely useful tool for developers, who can generate entire blocks of code from a set of functional instructions, convert code to other programming languages or optimise existing code.



However, the use of generative AI, of which ChatGPT is currently the leading exponent, has amplified a series of misgivings in society that go beyond the ethical debates around AI of recent years: the privacy and security of the data the models are trained on; the spread of fake and manipulated content; the attribution of responsibility for, and lack of control over, the content produced; the impact on employment, especially in creative fields such as journalism and advertising; and the increase in energy consumption driven by the complexity of the models and the high computational capacity their development and exploitation require. These join existing concerns such as the lack of transparency and explainability of the algorithms and the possibility of bias in the models.

Let us therefore reflect on the stance companies should adopt to mitigate these concerns, through a series of actions and procedures:

  • Data privacy and security: These models require large volumes of information for training, and there is a risk of those data being used to unlawfully access or manipulate personal information. Implementing a series of technical, organisational and legal measures ensures the protection of the data algorithms are trained on. The main measures that help minimise these risks are: anonymising data by deleting or encrypting personal information; restricting and controlling access to storage systems through roles and permissions; using servers and systems with security measures built into the infrastructure; and ensuring regulatory compliance, obtaining informed consent whenever sensitive data are involved.


  • Misinformation and the spread of fake content: Applying AI algorithms and techniques to detect and identify fake news and misleading content, auditing those systems independently, fostering media literacy so that people can assess the quality of the information they consume, and promoting transparency about information sources are some of the actions that can help counteract this misgiving.


  • Transparency: The lack of transparency in artificial intelligence systems and algorithms also raises ethical concerns. Without understanding how decisions are made, it is difficult to trust them or assess their objectivity. Recording and documenting the whole development process, as well as the data used in training, ensures the traceability of the decisions made by the algorithm.


  • Explainability: This is another crucial aspect. We must therefore require the use of interpretable techniques that explain the models' answers, identify the most relevant features, and provide clear, understandable reasons for the decisions the models make.


  • Bias: The possibility of bias in AI models is another concern. Training models on data that reflect social biases or inequalities may result in discriminatory decisions. In this regard, using large, representative datasets that include demographic diversity, and monitoring potential biases by analysing predictions across different demographic groups, can mitigate them. The composition of the teams that develop and implement the models also plays a key role: diverse teams, with people from different backgrounds, experiences and perspectives, can help identify and address unconscious biases. Concepts such as equity and fairness should therefore be central when developing and implementing these systems.


  • Environmental sustainability: This issue must also be addressed. The exponential growth of AI and the computational cost of the increasingly complex algorithms it uses imply rising consumption of energy resources, which can have a very negative impact on the environment. Optimising a model's code and workflow, as well as using efficient hardware and data infrastructures, can help reduce energy consumption. In addition, we must evaluate different algorithms and architectures to find the balance between accuracy and consumption, minimising the computational load and thus moving towards a more sustainable approach in the application of AI.


  • Employment: Finally, the impact that the use of generative AI may have on employment raises serious concerns. According to "The Potentially Large Effects of Artificial Intelligence on Economic Growth", the report published by Goldman Sachs in late March, roughly a quarter of current work in both the US and Europe could be automated by this technology. However, the same report offers hopeful conclusions, suggesting that new employment opportunities will also open up, creating roles and jobs that do not exist today, as has historically occurred in other technological revolutions. It is also important to note that generative AI will not replace creativity or human judgement. Advertisers, for example, will still be needed to interpret and adapt AI-created content to the specific goals of each campaign. Data scientists will be able to focus on more complex and strategic tasks, applying their specialised knowledge and judgement to improve learning models. And so it will be with most of today's roles, which will continue to be needed to interpret, validate and adapt whatever AI produces.
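The bias-monitoring idea mentioned above, analysing a model's predictions across different demographic groups, can be sketched in a few lines. The function below computes the positive-prediction rate per group and the largest gap between groups (a simple demographic-parity check); the function name and the example data are illustrative assumptions, not taken from any specific framework.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return per-group positive-prediction rates and the largest gap.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels of the same length,
            e.g. demographic segments (illustrative)
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical example: a model that outputs favourable decisions
# at different rates for two groups
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
rates, gap = demographic_parity_gap(preds, groups)
# rates: A -> 0.75, B -> 0.25; gap = 0.5
```

A large gap between groups would be a signal to re-examine the training data or the model before deployment; in practice this kind of check is typically run per demographic attribute and tracked over time as part of model monitoring.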


It is therefore vital to mitigate these misgivings as the development and application of artificial intelligence progress, by implementing regulatory frameworks that ensure its responsible use and by conducting independent audits to verify that all these measures and procedures are being carried out. It is essential that governments, organisations and expert ethics committees come together to establish a joint regulatory framework to address the risks associated with the use of AI. However, we cannot let the slow pace of creating these regulatory codes hold us back.

It is crucial that we in business move at the same pace as AI evolves, without becoming paralysed, taking responsibility for the development and application of ethical and fair AI, and encouraging and promoting fairness, transparency, privacy and security.

In short, a responsible AI, geared towards process efficiency and the benefit of society as a whole, must be the one guiding our path to the future.