Why we need to reconsider how knowledge and innovation are measured




Access to Knowledge

Intellectual Property / Intellectual Property Rights

2021-05-05 11:19:15

Knowledge and innovation have taken the lead as drivers of economic growth in developed countries, and so there has been growing interest within the international community in measuring and assessing progress in this area. Excited to embark on new research on alternative metrics of knowledge and innovation at A2K4D, the student of international development in me immediately recalled debates on how development is measured, from both mainstream and more alternative perspectives. How are existing measures of innovation and knowledge constructed, and how useful are they in bringing about positive policy change, or change through other channels? Why do we need to measure innovation and knowledge in the first place?

One common method of measuring development is the index. For example, the Human Development Index (HDI), produced by the UNDP since 1990, combines variables for life expectancy, literacy, and enrollment rates with GDP to produce a country ranking for 'human development'. Economist Martin Ravallion calls this type of index a 'mashup': a composite index with an unusually large number of moving parts that the producer is essentially free to set. The research team producing a mashup index is at liberty to include any variables it sees fit for measuring a particular phenomenon. Ravallion notes that while such indices are attractive, with their seemingly simple ranking of countries and their ability to combine multiple dimensions into a single figure, what does it actually mean to rank number 1 on the HDI or any other mashup?

'Mashup' indices of knowledge and innovation

There is an endless supply of similar mashups for different aspects of development, ranging from gender inequality (the Gender Inequality Index) to indices that assess the business environment in different countries, most famously the Ease of Doing Business Index. Knowledge production and innovation are no exception.
A quick Google search reveals a number of mashups that measure knowledge and innovation using a similar one-size-fits-all approach. The most prominent is the Knowledge Economy Index (KEI), based on the Knowledge Assessment Methodology (KAM) produced by the World Bank. The KAM rests on the premise that since knowledge is increasingly a key driver of economic growth, it is important to assess different countries' 'readiness for a knowledge economy'. One hundred and forty-eight variables are available to help identify areas of strength and others that require policy intervention. These variables gauge the status of four areas identified by the KAM as 'pillars of the knowledge economy': economic policies and institutional frameworks conducive to knowledge creation and dissemination, a skilled and educated labor force, a supportive innovation system, and modern information and communications technology infrastructure.

The KEI is calculated as a simple average of the scores for the four pillars (scores are normalized and assigned a value between 0 and 1), using a basic scorecard of 14 variables. The four sub-indices are: the Economic and Institutional Regime Index, which includes variables such as tariff and non-tariff barriers and regulatory quality; the Education Index, which includes average years of schooling and enrollment rates; the Innovation Index, which includes patents granted and royalty payments and receipts; and the ICT Index, which consists of variables like Internet and computer penetration rates. As of 2012, the KEI covered 146 countries, with Sweden ranking as the top knowledge economy and Myanmar at the bottom. The Knowledge Index (KI) does not differ much from the KEI; it simply excludes the Economic and Institutional Regime Index.
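The arithmetic behind the headline number is straightforward to sketch. The snippet below is purely illustrative: the pillar scores are invented, and the World Bank's actual normalization procedure (which ranks each country against the full sample) is not reproduced here.

```python
# Illustrative KEI-style calculation. Each pillar score is assumed to be
# already normalized to [0, 1]; the overall index is their simple average.
# All values below are hypothetical, not real country data.

def kei(pillar_scores):
    """Simple (unweighted) average of the four knowledge-economy pillars."""
    assert len(pillar_scores) == 4, "the KEI uses exactly four pillars"
    assert all(0.0 <= s <= 1.0 for s in pillar_scores), "scores must be normalized"
    return sum(pillar_scores) / len(pillar_scores)

# Hypothetical country: economic regime, education, innovation, ICT
pillars = {"economic_regime": 0.80, "education": 0.60,
           "innovation": 0.50, "ict": 0.70}
print(kei(list(pillars.values())))  # 0.65
```

Note what the simple average quietly assumes: a 0.1 gain on the ICT pillar exactly offsets a 0.1 loss on the education pillar, for every country alike.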
[Figure: Knowledge Economy Index (KEI) 2012 rankings]

Other prominent indices in the areas of knowledge and innovation include the Global Innovation Index, produced by the World Intellectual Property Organization, Cornell University, and the European Institute for Business Administration (INSEAD). Some organizations also produce their own rankings of innovation and knowledge, such as the Bloomberg Innovation Index and the Open Knowledge Index.

The trouble with mashup indices

Mashup indices like the KEI reflect a mainstream understanding of development, whereby pre-set models are used to explain economic phenomena and issues. A good example is the Structural Adjustment Programs (SAPs) administered by the World Bank and the International Monetary Fund in the 1990s, where a single dismal recipe of neoliberal reforms was imposed on a host of developing countries to 'fix' their economies. In a similar manner, this understanding of development has resulted in the propagation of a single method and a uniform standard by which to judge all countries' efforts in innovation and knowledge production. This reasoning is embodied in indices like the KEI: all countries should strive to reach the state of development, or of knowledge production and innovation, that 'developed' countries have attained. Since these countries are leaders, all countries should try to reach the same standards by the same means. This top-down approach assumes a similar path of development.

Ravallion's warning about the appeal of mashup indices applies in the case of knowledge and innovation: the simplicity of indices such as the KEI, and their presentation through catchy, user-friendly websites like that of the Global Innovation Index, may render them authorities in measuring knowledge and innovation. Ravallion raises three important questions with regard to indices of development:

1. What are we measuring?
2. What are the trade-offs involved in the index, and to what extent are variables weighted accurately (and what does this mean for the accuracy of rankings)?
3. How relevant or useful is the index for development policy?

What exactly are we trying to measure? Answering this involves having a specific theoretical grounding and translating it into quantifiable variables. Ravallion lists some concerns in this regard for the HDI, whose proponents argue that it is grounded in Amartya Sen's capabilities approach. Yet the index uses GDP to measure what is essentially economic well-being. Well-being, according to the capabilities approach, is about a set of beings and doings, capabilities to live a good life, such as being in good health. But good health is not necessarily best reflected in life expectancy: people may live longer but unhealthier lives. The point is that a lot, perhaps too much, gets lost in translation from theory to measurement.

Let us apply some of these questions to the KEI, and specifically to its innovation sub-index. What is meant by innovation here? The variables used in the basic scorecard for this sub-index are royalty payments and receipts, patent counts (applications granted by the United States Patent and Trademark Office, USPTO), and scientific and technical journal articles, which are found mostly in Western academic journals. The KEI is clearly based on a specific conceptualization of what constitutes knowledge and innovation, one best suited to the case of more developed countries.

Another well-known definition of innovation is that of the Oslo Manual, which defines innovation as the implementation of a new or significantly improved product (good or service) or process, a new marketing method, or a new organizational method in business practices, workplace organization, or external relations. There is no definition of knowledge per se.
This definition is applied to all countries at different stages of development and implicitly links innovation to the market. As such, traditional knowledge and forms of innovation that do not necessarily reach formal markets may be overlooked. It is unjust to judge developing countries' progress in knowledge production and innovation by the same criteria as countries at much later stages of development, especially under the current global regime of maximalist intellectual property. Joseph Stiglitz argues that it is not only a resources gap but also a knowledge gap that separates developed and developing countries. Ha-Joon Chang writes extensively on how international agreements such as the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) put developing countries at a disadvantage by restricting the flow of necessary knowledge and technology from developed to developing countries. Chang gives historical evidence of the importance of such flows of knowledge and technology in the industrialization of Britain in the sixteenth and seventeenth centuries, and later of continental Europe. Much of the transfer of technology and knowledge that enabled these countries to catch up took place outside the scope of the law.

In addition to the problematic proxies used for knowledge production and innovation across the development divide, the KEI does not capture all types of innovation and knowledge production, such as traditional knowledge production in Africa. Fred Gault, a renowned authority on innovation, acknowledges that innovation takes place in different forms in developing countries, especially in the informal economy and in smaller firms. Innovation there also takes place under conditions far less favorable than those present in developed countries.
Ravallion also points out that it is important to consider the formula that determines what happens to the overall index when one variable changes relative to another: the marginal rate of substitution between variables. The assignment of uniform weights to variables across countries is a related concern. The reality is that the context in each country, especially within the heterogeneous group often labeled 'developing countries', is unique, so a fixed marginal rate of substitution between variables and fixed weights for all countries is clearly problematic. While Ravallion is referring to the equal one-third weighting of GDP, life expectancy, and mean years of schooling in the HDI, the same logic applies to the KEI, which is a simple average of the four sub-indices with no change in weights across regions or countries. But how much a change in the regulatory environment or the innovation ecosystem matters may differ greatly depending on the country under study, and that is not accounted for in most mashup indices.

The politics of measurement

Besides the issues mentioned above, another pertinent question is how useful an index is for policy making or for effecting change. Gault stresses that measurement should be a means of encouraging public debate and policy change. Ravallion questions the usefulness of mashup indices on the grounds that they do not account for differences in countries' contexts; by context, he means "the many conditions that define the relevant constraints on country performance." It is no surprise that most mashup indices, including the KEI, the HDI, and the Ease of Doing Business Index, reflect the development divide, with countries like Switzerland, Sweden, or Singapore topping the rankings and countries in Sub-Saharan Africa coming at the very bottom. But the indices do not tell us how countries performed despite the different constraints they face.
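The uniform-weighting concern raised above can be made concrete with a small numerical sketch. Both countries and all scores below are invented purely for illustration; the point is only that equal weights and context-specific weights can produce opposite rankings from the same data.

```python
# Two hypothetical countries with invented pillar scores, in the order:
# economic regime, education, innovation, ICT.
country_a = [0.9, 0.4, 0.5, 0.7]  # strong regulatory environment
country_b = [0.5, 0.8, 0.7, 0.4]  # strong education and innovation

def weighted_index(scores, weights):
    """Weighted average. Fixed weights imply a fixed marginal rate of
    substitution between any two pillars, regardless of country context."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(s * w for s, w in zip(scores, weights))

equal_weights = [0.25, 0.25, 0.25, 0.25]  # the KEI's uniform scheme
context_weights = [0.1, 0.4, 0.4, 0.1]    # hypothetical: education/innovation matter most

print(weighted_index(country_a, equal_weights) >
      weighted_index(country_b, equal_weights))    # True: A ranks first
print(weighted_index(country_a, context_weights) >
      weighted_index(country_b, context_weights))  # False: the ranking reverses
```

Nothing about the underlying data changes between the two comparisons; only the weights do, which is precisely why a single fixed weighting for every country is hard to defend.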
For example, in the case of knowledge production, in chapter 2 of the Arab Knowledge Report 2009 Nagla Rizk highlights key region-specific constraints that prevent the Arab region from moving closer to a knowledge society. These include political repression, constraints on creativity (including censorship in its many forms), and economic constraints. Rizk also discusses what an 'enabling environment' in the Arab world requires, with a particular focus on political but also economic freedoms. These issues are unique to the region, and indices that aim to assess the state of knowledge and innovation there would be more accurate if they took them into account. Gault provides an example of steps toward a more bottom-up approach: the Bogotá Manual, which adapts the Oslo Manual to conditions unique to Latin American and Caribbean countries.

In the case of the HDI, Ravallion also asks to what extent indices actually lead to better policies. In other words, do politicians need these indices to tell them which areas need support? He gives an example from India, where Kerala fares significantly better on indicators of both mortality and literacy than states with twice its GDP, such as Punjab. These differences have not resulted in significant policy changes in the lagging states to close the gap with Kerala. Ravallion suggests that the reason is not a lack of information but a combination of socio-economic and political processes. In other words, the disparity persists not because politicians in Punjab lack the information needed to design better policies, but because of conscious political decisions by those in power. In Kerala, by contrast, education and literacy have been at the top of the political agenda.
For example, state spending on education in Kerala represented 38 percent of the state budget in 2014/2015. Political priorities are not captured properly in the KAM or in any of the indices based on an apolitical view of development. While the KEI includes a number of governance indicators such as rule of law and regulatory quality, none of these captures where government priorities lie in terms of spending. The index does not capture the political reasons behind a government's prioritizing particular sectors at the expense of, say, education. In a world where the playing field is far from level between developed and developing countries on all fronts, it matters even more where government spending goes among those trying to 'catch up'. Is spending on education, health, and support for research and development a priority? Or does spending on security or mega-projects take precedence? To what extent can a ranking on an index lead to a political decision to shift such priorities?

A2K as an alternative

Ravallion does not suggest that mashup indices should be abandoned altogether, but rather that they need to be more clearly grounded in theory and to better reflect contextual factors; this way, they could be more useful for policy making. The status quo is one in which a particular theoretical understanding of development, and subsequently of knowledge and innovation, dominates and guides the production of indices. What is needed is a wider, more complex understanding of development, one that incorporates socioeconomic as well as political processes and allows for more bottom-up contribution. The access to knowledge (A2K) paradigm promotes a more horizontal and democratic conceptualization of knowledge production and innovation that resonates with these alternative views of development.
It emphasizes the right not only to receive but also to participate in the creation, manipulation, and extension of information, tools and inventions, literature, scholarship, art, popular media, and other expressions of human inquiry and understanding. Within the A2K paradigm, development is understood in its wider sense, prioritizing human and social development. In Access to Knowledge in Egypt, Rizk and Lea Shaver define A2K as a 'demand for democratic participation and global inclusion and economic justice', and so it is by nature political. It is a demand to participate fairly in the production of knowledge and innovation, and for this participation to be better accounted for. As such, the A2K paradigm offers a more nuanced understanding of both development and knowledge, encompassing more forms of knowledge production and innovation through a lens better equipped to capture the micro, bottom-up picture of what is actually taking place on the ground.

From an A2K perspective, it is important to document the knowledge and innovation happening in developing countries while going beyond the narrow definitions currently used by the various indices and manuals. Empirical qualitative research, such as case studies and interviews, is essential in this regard. For example, the Open African Innovation Research and Training project (Open A.I.R.), of which the Access to Knowledge for Development Center is the North African hub, has done extensive empirical work surveying knowledge and innovation in Africa that is not captured by the metrics used in the KEI. Qualitative, bottom-up work thus allows for a nuanced understanding of the challenges and constraints each country or region faces, and is more likely to lead to realistic and useful metrics of knowledge and innovation.