
2025-07-14 Lorenzo Diaferia, Leonardo Maria De Rossi, Gianluca Salviotti

AI in business: assessing and prioritizing areas of intervention

To successfully implement Artificial Intelligence, companies must rely on a structured evaluation that considers business impact and operational feasibility. Measuring economic effectiveness alone is not enough: it is essential to map technological maturity, data availability, infrastructure consistency, regulatory constraints, internal expertise, and ethical risks. A solid roadmap helps avoid overestimations and focus investments on what is truly achievable – understanding where AI can really make a difference, and where it cannot. An excerpt from the book AI Management by Lorenzo Diaferia, Leonardo Maria De Rossi, and Gianluca Salviotti (Egea, 2024).

The logic for defining a priority order among the areas of intervention and the initiatives to be pursued must be based on a broad evaluation that includes technical, business, regulatory, and internal-competence elements. Several factors must indeed be aligned for the initiatives to succeed. In general, there are two dimensions we can consider to define a priority order. The first looks at the expected business impact of the initiatives and is mainly linked to an assessment of business opportunity. The second refers to an assessment of the feasibility of carrying out the initiative, spanning cross-cutting elements related to technology, internal and external context, and available resources. The two dimensions can then be combined in a final evaluation map, which allows the identified initiatives to be visualized in light of the assessments made.

Both dimensions can be assessed through a structured analysis of a few key variables. Naturally, the variables that follow are a working suggestion drawn from previous evaluation experience and do not necessarily constitute an exhaustive list in every context. Nevertheless, they are central enough that they should not be overlooked.

Starting from the expected impact of the initiatives, there are four key variables that should guide the evaluation; a simple scoring sketch follows the list.

  1. Business priority. This variable expresses the degree of alignment of the initiative with the company’s development guidelines and with the business opportunities that emerged from the analysis carried out in the previous step of the AI roadmap. A high level of business priority is observed when the initiative is considered crucial for the identified development goals.
  2. Economic impact. This variable expresses the level of economic impact that the AI initiative should generate, in terms of increased revenues or reduced expected costs. In the evaluation, it is important to maintain a short-term logic, without giving in to the temptation to evaluate a hypothetical long-term economic impact that would risk being based on a growing number of assumptions that are difficult to verify.
  3. Organizational impact. This variable refers to the impact on internal processes and organizational structures expected from the adoption of the initiative. As with economic impact, it is advisable to choose a well-defined time horizon for the evaluation.
  4. Customer impact. This variable expresses the expected impact of the initiative on the customer experience. Depending on the initiatives being evaluated, it might be more appropriate to limit the evaluation to internal organizational impact or to external impact on the end customer: these two variables can therefore be used together or alternatively, depending on the circumstances.
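
To make the combination of these variables concrete, the sketch below shows one possible way to roll the four impact variables into a single expected-impact score per initiative. The 1-to-5 scale, the equal weighting, and the example scores are illustrative assumptions, not a scheme prescribed in the book.

```python
from dataclasses import dataclass, fields

# Hypothetical 1-to-5 scores for the four impact variables described above.
# The scale, the equal weighting, and the example values are assumptions
# made for illustration only.
@dataclass
class ImpactScores:
    business_priority: float
    economic_impact: float
    organizational_impact: float
    customer_impact: float

    def expected_impact(self) -> float:
        """Average the four variables into a single expected-impact score."""
        values = [getattr(self, f.name) for f in fields(self)]
        return sum(values) / len(values)

# Example: a customer-support virtual assistant scored by a hypothetical team.
virtual_assistant = ImpactScores(
    business_priority=4,
    economic_impact=3,
    organizational_impact=4,
    customer_impact=5,
)
print(f"Expected impact: {virtual_assistant.expected_impact():.1f} / 5")
```

A weighted average, or the exclusion of one of the last two variables as discussed above, would work equally well; the point is simply to make the assessment comparable across initiatives.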

There are three interrelated points of attention that must not be underestimated in the analysis of these variables. As we have seen, AI applications learn from data and typically refine their performance over time as new data becomes available. The impact evaluation must therefore take into account the timelines, the scale, and the maintenance and updating efforts required to reach the performance level necessary to produce the desired impacts. Let us consider an example. Suppose that a virtual assistant to be included in the customer support of a telecommunications company must reach a level of accuracy that allows it to autonomously resolve 70 percent of customer interactions in order to produce a tangible economic impact. Suppose also that, at present, the data available for building the model is limited, allowing only an initial launch to handle a single type of customer request (for example, the disambiguation between network-related problems and issues specific to the individual user). Only in a second phase, through the collection of new data, might it become possible to expand the use cases covered by the virtual assistant. In this situation, impact evaluations must take into account that, although the potential impact might be high, internal conditions call for caution in the short-term evaluation, to avoid overestimation.

Having assessed the initiatives from the point of view of their expected impact, it is now necessary to analyze the feasibility of each initiative, given the company’s external and internal context. Here too, the variables proposed below are a starting point that can be enriched or modified according to the specific characteristics of the company. Seven main variables influence the evaluation of an initiative’s feasibility; a sketch of how the two dimensions come together in the evaluation map follows the list.

  1. Technical maturity. This variable refers to the maturity level of the technological solutions that enable the evaluated initiative. An application is typically enabled by one or more technologies, and a non-negligible feasibility element is the maturity level that these technologies currently offer. Let us compare, for example, a platform for creating corporate chatbots – which can recognize user intents within their requests – with a new solution that allows part of the software programming process to be automated starting from instructions expressed in natural language. Even if both applications are based on similar models, their levels of maturity may appear quite different, especially depending on the context.
  2. Data availability. This variable refers to an overall evaluation of the presence, quantity, quality, and update frequency of the data required for the initiative in question. The evaluation will also serve as a prompt for reflection on the datasets needed to solve the problem, considering, where necessary, data that fall outside the company’s perimeter. The evaluation regarding data availability serves as an initial useful sanity check of feasibility. However, whether a low score constitutes an insurmountable risk for the initiative also depends on other implementation-related variables, which we will cover in Chapter 4. To simplify: today, some initiatives can rely on a mature market of AI products that make experimentation and the creation of applications more accessible even with a limited initial amount of data. Nevertheless, in most high value-added cases, the availability of distinctive data in adequate quantity and quality represents an essential prerequisite. An iterative approach to the AI roadmap is also useful to refine this evaluation variable in light of the implementation considerations we will address later on.
  3. Infrastructure consistency. From a technical point of view, the feasibility of an initiative does not reside solely in the maturity of the underlying enabling technologies. By infrastructure consistency, we refer to the presence of contextual conditions – internal and external, depending on the case – that make the implementation of the initiative possible. For example, a project focused on improving the customer experience will be difficult to pursue in the absence of a technological platform that provides access to homogeneous data. As a case in point, an Italian manufacturing group that had grown significantly through acquisitions across Europe in recent years undertook a predictive maintenance initiative on plants that were not integrated in terms of the enabling IT infrastructure, and it encountered high levels of management complexity as a result.
  4. Regulation. This variable captures the need to evaluate an initiative not only from a technical and business point of view, but also against the regulatory context within which it must fit. This is particularly relevant for Artificial Intelligence, whose regulatory framework was still under revision within the European Union at the time of writing this book, in a process that began in April 2021. Legal implications must always be taken into account, given the rapid evolution of the regulatory context. For instance, Clearview, a company specialized in facial recognition software, was recently fined by the UK privacy authority for having used billions of images of internet users to train its solutions. Although the fine is limited to 7.5 million pounds, it is a clear signal that the evolution of the legal context deserves careful evaluation.
  5. Availability of competencies. This variable expresses the availability of the necessary competencies for the evaluated initiatives. It concerns both the internal availability of competencies and the ability to manage external expertise involved in carrying out the initiatives. In many cases, in fact, companies that do not operate in the AI sector struggle to develop internally the skills required for high-tech initiatives. To mitigate this issue, the strategy often includes creating an internal capability for monitoring and managing external vendors involved in the different phases of the initiatives – also leveraging their contribution for internal training purposes. A European pharmaceutical group, for example, recognized from the early stages that exploring AI to support the process of developing new drugs could not be carried out internally. One of the reasons lay precisely in the lack of internal expertise and resources that could be dedicated to the activity. The chosen path involved relying on a specialized external company to handle all the most technical tasks. However, for the group, it was crucial to develop an internal competence in managing the supplier, laying the groundwork for a twofold benefit: on one hand, the ability to monitor more critically and consciously what was being proposed externally; on the other hand, using the collaboration as an opportunity to further train its own internal resources.
  6. Financial compatibility. This evaluation element expresses the level of financial feasibility for the company. AI applications vary considerably in terms of complexity and the related costs involved. Financial compatibility with the resources available within the company for the initiatives may also be influenced by elements that emerged in the evaluation of infrastructure consistency. In the presence of a low level of internal infrastructure consistency, investments may in fact be required even to create such enabling preconditions.
  7. Ethical and transparency implications. The ethical issues and the problem of transparency in AI-generated predictions are well-known concerns. This variable is therefore intended to capture the ethical risks and implications related to the initiative. Naturally, a fully exhaustive evaluation is not always possible. However, an analysis of potential risks at an early stage can help trigger mitigation actions right away or even discourage the continuation of the initiative altogether. The most well-known and easily identifiable risk examples certainly come from application areas where an AI system’s prediction has a tangible impact on the lives of the users involved – touching upon high-risk areas. The healthcare and financial sectors are two of the best-known fields with high potential ethical impact. For example, in recent years several studies have shown that the presence of a debtor’s first and last name in their email address has high predictive power for the likelihood of debt repayment. However, further studies have highlighted how, with equal creditworthiness and conditions, African American debtors with names statistically associated with their ethnicity suffer a negative impact compared to cases in which only identifiers unrelated to ethnicity are used. Furthermore, in a recent study in the medical field published in Lancet Digital Health, researchers showed that an AI model can predict a patient’s ethnicity from chest X-ray images with an accuracy of 94–96 percent. Expert researchers and radiologists were not able to determine the patients’ ethnicity from the same images, nor to understand exactly how the model arrived at its predictions. These examples are just two cases – among thousands that have emerged in recent years – that require companies interested in AI initiatives to carefully evaluate the implications, even in seemingly harmless scenarios.
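
Putting the two dimensions together, the sketch below illustrates one possible form the final evaluation map mentioned earlier could take: each initiative carries an expected-impact and a feasibility score (for example, averages of the variables just described) and is placed in one of four quadrants. The initiative names, the scores, and the 3.0 threshold are hypothetical and serve only to show the mechanics.

```python
# Illustrative positioning of initiatives on the final evaluation map.
# The initiative names, the 1-to-5 scores, and the 3.0 threshold are
# assumptions made for this sketch, not figures from the book.
initiatives = {
    "Customer-support virtual assistant": {"impact": 4.0, "feasibility": 2.5},
    "Predictive maintenance": {"impact": 3.5, "feasibility": 4.0},
    "Internal document classification": {"impact": 2.0, "feasibility": 4.5},
}

THRESHOLD = 3.0  # midpoint separating "high" from "low" on the assumed scale


def quadrant(scores: dict) -> str:
    """Classify an initiative into one of the four quadrants of the map."""
    high_impact = scores["impact"] >= THRESHOLD
    high_feasibility = scores["feasibility"] >= THRESHOLD
    if high_impact and high_feasibility:
        return "pursue first"
    if high_impact:
        return "high impact but low feasibility: remove obstacles first"
    if high_feasibility:
        return "feasible quick win: candidate for early experimentation"
    return "deprioritize or re-evaluate later"


# Rank initiatives by combined score and print their position on the map.
for name, scores in sorted(
    initiatives.items(),
    key=lambda item: item[1]["impact"] + item[1]["feasibility"],
    reverse=True,
):
    print(f"{name}: impact={scores['impact']}, "
          f"feasibility={scores['feasibility']} -> {quadrant(scores)}")
```

Plotting the same data as a two-by-two matrix, with impact on one axis and feasibility on the other, yields the evaluation map described in the text; the thresholds and quadrant labels are, again, choices each company should calibrate for itself.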

This article is an excerpt from the book "AI Management – Strategies and Approaches in Business" by Lorenzo Diaferia, Leonardo Maria De Rossi, and Gianluca Salviotti, published by Egea as part of the series in collaboration with SDA Bocconi School of Management.

Photo iStock / Jian Fan
