
Is the data economy destined to benefit only a few elite firms?
Written by Amin ELSALEH   
Sunday, 25 June 2017 09:16

 

« Two weeks ago The Economist's cover article was about the data economy. It declared data the world’s most valuable resource and pointed out that “the five most valuable listed firms in the world” are data-heavy technology companies.

The Economist is not alone in marveling at both the value of data and how concentrated its benefits are. A recent Organization for Economic Cooperation and Development (OECD) paper showed that average productivity growth among the top 100 most productive manufacturers was 3.5%, while the rest of the entire market was at 0.5%. The gap is even bigger in the services sector.

One of the things that helps this small group of firms consistently become more productive than their rivals is their creation and use of unique stocks of data capital. Data capital assets do not diffuse to rival firms.

This is a critical insight. If your whole data science team walks across the street to the competition, they take with them crucial experience to create better algorithms. But they can’t take your data, forcing the new algorithm to find its own fuel.

Is the data economy, then, destined to benefit only a few elite firms? No. Despite the landscape today, data trade and data liquidity will spread the benefits among more businesses.

 

Data trade, the buying and selling of data to create new businesses and streamline existing operations, has been going on for decades. Credit bureaus, for example, were early data traders, selling consumer financial data to assist banks’ lending decisions. But with new sources of data and demand to use it, data trade is exploding.

Oracle Data Cloud contains 5 billion global consumer profiles with 40,000 different attributes sourced from over 15 million websites. It also has 400 million business profiles and $3 trillion in consumer transactions. This data, originating with one group of firms, can be purchased by others to improve the efficiency of marketing dollars, better inform consumer product launches, or redesign social media publishing strategies.

The same old story about using more data to make better decisions gets a twist. Data created by one company gets used in another, in ways the first never imagined.

Data liquidity, getting the data you want into the shape you need for the task at hand, is an imperative for any company creating new digital products and services. But having data spread across diverse repositories, like Hadoop clusters and NoSQL, graph, and relational databases gets in the way. The pressure to use data in diverse algorithms, analytics and apps multiplies the problem exponentially.

Moving to the cloud is the fastest way to remove these barriers. But not all companies can instantly move all computing jobs to a public cloud. This is why Oracle offers a full range of Cloud at Customer machines, basically racks of Oracle’s cloud that run in your data center, fully managed by Oracle and with subscription pricing. No other technology company offers this.

The benefits of exploiting data capital to create competitive advantage are real, but highly concentrated today. Oracle’s reinvention of enterprise computing as a set of services that are easy to use and buy promises to bring these benefits to all firms, not just the global superstars. »

THE REFERENCE:

https://blogs.oracle.com/infrastructure/spreading-the-wealth-of-data-capitalism?elq_mid=81065&sh=26141813221582615221915017165&cmid=NAMK160705P00135C0002

My essay is an attempt to answer the following : « Is the data economy, then, destined to benefit only a few elite firms? »

Apparently, that has been the case until now. What tools are available to avoid this outcome?

In my essay[i] on stochastic models, in particular the section « Handling the human social-technical dimension; in particular the man-system interface, including positioning technology at man's service », you may find guidelines to produce these tools and make BIG DATA exploitable by the large majority of users:

1. The engine should trace the “player's” behaviour, evaluate the player's capabilities and quickly meet the player's needs.

2. The immersion generated by simulation enables training and experimentation with behavioural strategies, in particular learning “by doing”.

3. The engine should use the following resources (a minimal sketch of such an engine follows point 3.3):

3.1. Tools to be customized by trainers.

3.2. Applied standards.

3.3. Discovery of new learning approaches through the results obtained, whether these approaches are positive or negative, in the sense of improving the technological performance of assembled prototypes.
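A minimal sketch of such an engine, assuming nothing beyond points 1 and 2 above: it traces each “player” action, derives a crude capability score from recent outcomes, and picks the next task to match that score. The names (PlayerTrace, next_task) and the scoring rule are illustrative inventions, not an existing tool.

```python
from dataclasses import dataclass, field

@dataclass
class PlayerTrace:
    """Records each action and its outcome so behaviour can be evaluated later."""
    actions: list = field(default_factory=list)

    def record(self, action: str, success: bool) -> None:
        self.actions.append((action, success))

    def capability(self) -> float:
        """Crude capability estimate: fraction of the last 20 actions that succeeded."""
        recent = self.actions[-20:]
        if not recent:
            return 0.5  # no evidence yet: assume average ability
        return sum(ok for _, ok in recent) / len(recent)

def next_task(trace: PlayerTrace, tasks: list) -> dict:
    """Pick the task whose difficulty is closest to the estimated capability,
    keeping the player in a productive learning-by-doing zone."""
    level = trace.capability()
    return min(tasks, key=lambda t: abs(t["difficulty"] - level))

# One step of the trace -> evaluate -> adapt loop.
trace = PlayerTrace()
trace.record("assemble_prototype", success=True)
tasks = [{"name": "basic", "difficulty": 0.3}, {"name": "advanced", "difficulty": 0.8}]
print(next_task(trace, tasks)["name"])  # -> "advanced" (capability is 1.0 so far)
```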

4. How may SPDF (Standard Process Description Format)[ii] produce a universal engine to run the stochastic model[iii]?

4.1. SPDF consists of two parts (a toy illustration closes this section):

4.1.1. A message structured-data part (including semantics), and

4.1.2. A process description part (with a higher level of semantics).

4.2. Two key outputs of the SPDF research will be a process description specification and a framework for the extraction of semantics from legacy systems.

4.3. Note that: a) The more semantic rules we have, the more unpredictable events are controlled.

b) The knowledge acquired to elaborate semantic rules for unpredictable events requires many runs of the stochastic model.

c) Convergence shall not be reached until more qualitative semantic rules are obtained.

d) Performing a given scenario dynamically is the goal of the proposed messaging system.
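Since the concrete SPDF syntax is not reproduced here, the following toy illustration only mirrors the two-part split of 4.1: a structured-data part carrying fields with their semantics, and a process description part stating, at a higher semantic level, what is to be done with them. The field names and rule texts are invented for the example.

```python
# Hypothetical illustration of the two-part SPDF structure (4.1), not the real format.
spdf_message = {
    # 4.1.1 Message structured-data part, with per-field semantics.
    "data": {
        "invoice_total": {"value": 1250.0, "semantics": "amount;currency=EUR"},
        "buyer_id":      {"value": "B-042", "semantics": "party;role=buyer"},
    },
    # 4.1.2 Process description part: higher-level semantics about the workflow.
    "process": [
        {"step": "verify",  "rule": "invoice_total must match sum of line items"},
        {"step": "approve", "rule": "requires role=buyer acknowledgement"},
        {"step": "archive", "rule": "retain 10 years"},
    ],
}

def describe(msg: dict) -> None:
    """Walk both parts of the message, mirroring an engine that first reads
    the structured data, then executes the declared process steps."""
    for name, fld in msg["data"].items():
        print(f"field {name} = {fld['value']}  [{fld['semantics']}]")
    for step in msg["process"]:
        print(f"step {step['step']}: {step['rule']}")

describe(spdf_message)
```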

5. VALIDATION

5.1. TARGET

5.1.1. Use the power of information to: explore social and economic life on Earth and discover options for a sustainable future.

5.1.2. HOW?

a) Use an iterative approach to reach convergence and resolution.

5.1.3. WEAKNESS

a) Reaching false convergence instead of true convergence.

5.2. RECOMMENDATION

a) Use a successive-approximation mechanism with a procedure of elimination (a minimal sketch follows this list).

b) Associate the “learning by doing” approach with an engine that traces player behaviour, evaluates the player's capabilities and quickly meets the player's needs.

c) Build a “Knowledge Accelerator” with that engine to handle the large volume of data accessible for each conflict[iv] and run a large number of iterations to reach convergence of the stochastic model.
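A minimal sketch of recommendation 5.2 a), assuming a toy stochastic model: candidate semantic rules are tested over many runs, rules that fail too often are eliminated, and convergence is declared only after a long streak of clean runs, to guard against the false convergence named in 5.1.3. The model, rule reliabilities and thresholds are all invented.

```python
import random

def run_model(rules: dict) -> dict:
    """One run of a toy stochastic model: each rule independently holds
    with its (hidden) reliability probability."""
    return {name: random.random() < p_ok for name, p_ok in rules.items()}

# Candidate semantic rules mapped to an assumed reliability.
rules = {"rule_a": 0.999, "rule_b": 0.40, "rule_c": 0.995}
failures = {name: 0 for name in rules}
clean_streak = 0

for iteration in range(1, 2001):
    outcome = run_model(rules)
    for name, ok in outcome.items():
        if not ok:
            failures[name] += 1

    # Procedure of elimination: after a warm-up, drop rules failing > 20% of runs.
    if iteration >= 50:
        for name in [n for n in rules if failures[n] > 0.2 * iteration]:
            del rules[name]

    # Guard against false convergence: demand a long streak of clean runs,
    # not one lucky run, before declaring the surviving rule set converged.
    clean_streak = clean_streak + 1 if all(outcome.get(n, True) for n in rules) else 0
    if clean_streak >= 50:
        print(f"converged after {iteration} runs; surviving rules: {sorted(rules)}")
        break
```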

KEYWORDS:

Data certification[v]; virtual learning; e-learning[vi]; virtual tutor; AI (Artificial Intelligence) paradigms, such as Case-Based Reasoning[vii], Bayesian Inference and Intelligent Agents Simulation; universal engine; SPDF (Standard Process Description Format)[viii]; semantics[ix]; BPEL; WSDL; predictable and unpredictable events; Standard Model; Higgs particles; world business collaboration; chaotic triggers; morality; global order

 



[ii] To learn more about successful implementations of SPDF scenarios, see http://www.ediaudit.fr/

 

[iii] "In probability theory", a purely stochastic system is one whose state is non-deterministic (i.e., "random") so that the subsequent state of the system is determined probabilistically. Any system or process that must be analyzed using probability theory is stochastic. Stochastic systems and processes play a fundamental role in mathematical models of phenomena in many fields of science, engineering, and economics.

 

[iv] The stochastic model we developed is used for conflict resolution; since it manipulates BIG DATA, its applications cover any case study, in particular the « data economy », the object of Claude Robinson's study.

[v] An Implementation of EDIAUDIT on a data certification server 
Amin Elsaleh, Managing Director, VANEDI Ltd, UK http://xml.coverpages.org/paris9802.html

The implementation of the EDIAUDIT tool on a data certification server is based on the bridging concept between EDI and SGML. This enables the control of interdependency rules which might exist between data and provides additional features for securing business transactions. The attached reference provides a detailed description of how e-government may use our data certification concept:

http://www.mlfcham.com/v1/index.php?option=com_content&view=article&id=1005&Itemid=1468
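A minimal sketch of the “interdependency rules between data” mentioned above, with invented field names and rules: each rule ties the validity of one field to the value of another, and the document is certified only if every rule holds. This illustrates the concept only, not the EDIAUDIT rule syntax.

```python
# Illustrative interdependency rules between fields of a structured document
# (field names and rules are invented for this sketch).
document = {"country": "FR", "vat_number": "FR123456789", "currency": "EUR"}

rules = [
    # (description, predicate over the whole document)
    ("FR documents must carry an FR VAT number",
     lambda d: d.get("country") != "FR" or str(d.get("vat_number", "")).startswith("FR")),
    ("FR documents must be priced in EUR",
     lambda d: d.get("country") != "FR" or d.get("currency") == "EUR"),
]

violations = [desc for desc, check in rules if not check(document)]
print("certified" if not violations else f"rejected: {violations}")
```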

 

[vi] An e-learning scenario is in fact like a traditional lesson, and the ideal solution is to simulate a teaching-learning relation with a virtual teacher able to interact with the learners and to instruct them (Prodan et al., 2008). A good traditional teacher learns all the time from previous didactic experiences. Based on this historical feedback, the teacher exploits prior specific successful episodes, and avoids prior failures. We introduce a similar feedback mechanism in our technology of elaborating e-learning scenarios. The feedback information, collected from learners’ remarks and from prior results and successes, is stored in case bases. The relevant cases are retrieved and adapted to fit new situations from new e-learning scenarios, or to improve the previous ones. In addition, our approach in creating an e-learning scenario relies upon a special sort of goal-oriented intelligent agents (Nwana, 1996), able to incorporate knowledge, teaching methods and pedagogical characteristics into e-courses (Kanuka, 2008). We intend to implement a simulation of some intelligence-based actions and initiatives, to be incorporated into e-learning scenarios, with the purpose of mapping, planning and monitoring the pace and the progress of a learning process. Following the traditional model, the cases of positive experiences from previous e-learning scenarios are stored in case bases created with XML and CBR (Case-Based Reasoning) technologies (Leake, 1996). An e-course consists of a set of e-learning scenarios, each e-learning scenario being generated by virtue of some well-defined learning objectives. To generate intelligent and practical e-learning scenarios for a particular e-course, we first created a particular infrastructure containing the knowledge from the domains of the target e-course. For this purpose, we generated consistent Java and XML based knowledge bases, containing the integrated knowledge of the best teachers. In addition, we implemented in Java a set of simulation algorithms describing real-world phenomena, processes and activities to be included in e-courses. When generating a new e-learning scenario, we use a feedback mechanism based on our historical experience from previous e-learning scenarios. We also use Java technologies to generate intelligent and practical e-learning scenarios, based on new AI (Artificial Intelligence) paradigms, such as Case-Based Reasoning, Bayesian Inference and Intelligent Agents (Prodan et al., 2006, 2008, 2010). A learner can access the e-course and launch an e-learning scenario either locally, or via the WWW in a context of distance learning. From “E-Learning Tools as Means for Improving the Teaching-Learning Relation”, http://cdn.intechweb.org/pdfs/27919.pdf
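A toy sketch of the retrieve-and-adapt cycle described in this excerpt, assuming a flat case base scored by simple feature overlap; the case format and similarity measure are placeholders, not the XML/CBR implementation the authors cite.

```python
# Toy Case-Based Reasoning cycle for e-learning scenarios (hypothetical case format).
case_base = [
    {"features": {"topic": "algebra", "level": "beginner"},
     "scenario": "worked-examples-first", "feedback": 0.8},
    {"features": {"topic": "algebra", "level": "advanced"},
     "scenario": "problem-solving-first", "feedback": 0.6},
]

def similarity(case_features: dict, query: dict) -> int:
    """Count matching feature values (a deliberately crude measure)."""
    return sum(case_features.get(k) == v for k, v in query.items())

def retrieve_and_adapt(query: dict) -> str:
    # Retrieve: most similar past case, breaking ties by stored learner feedback.
    best = max(case_base, key=lambda c: (similarity(c["features"], query), c["feedback"]))
    # Adapt: reuse the stored scenario, tagged with the new learner's level.
    return f"{best['scenario']} (adapted for {query['level']})"

print(retrieve_and_adapt({"topic": "algebra", "level": "intermediate"}))
```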

 

[vii] We believe that SPDF is the answer, versus CBR, in any scenario based on AI, and that it is more in compliance with the stochastic modelling approach. The same reasoning applies to SGML (Standard Generalized Markup Language) versus XML, a restricted subset which has bounded the power of SGML and slowed down the extension of its AI tools.

[viii] To learn more about SPDF: it has been used as a standard for security in e-commerce. The following description, published at http://www.cheshirehenbury.com/ebew/e2000abstracts/section2.html, explains how:

We started developing a new generation of servers for e-commerce oriented towards the association of three standards: SGML, EDI and JAVA. This association enabled us to build a certification tool for EDI messages, supported by a knowledge database that is unique for each business type, and a dynamic routing engine to provide communications with on-line users. For two years we populated one of those knowledge databases, dedicated to the insurance sector, and in parallel we built a new security standard based on two concepts: the interception of data in a structured document, and their verification according to security rules applied to the intercepted data. This security standard consists of a set of expressions which allow the knowledge database to be populated for any commercial sector (namely insurance, distribution, banking and others). It consequently also allows any type of traditional business to be migrated to e-commerce, with the guarantee of full reliability of the data exchanged between the partners involved in a business transaction and full tracing of all the documents exchanged during the business transaction lifecycle, including automatic reports, some of which may trigger the rejection of incoherent or fraudulent documents. We believe that with this new standard we can provide a valuable type of content to all proposed portal solutions: the knowledge database for each business type, associated with a new type of security, and the business rules, which are not public and are exclusively under the control of the executive management within a given company.
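A minimal sketch of the two concepts named above, interception of data in a structured document and verification against sector-specific security rules, including the trace of each decision; the insurance rulebook and document fields are invented for illustration.

```python
# Hypothetical sketch of interception + verification against a sector knowledge base.
knowledge_db = {
    "insurance": [("premium",   lambda v: isinstance(v, (int, float)) and v > 0),
                  ("policy_id", lambda v: isinstance(v, str) and v.startswith("POL"))],
}

def certify(document: dict, sector: str) -> list:
    """Intercept each controlled field and verify it; return a full trace."""
    trace = []
    for field_name, check in knowledge_db[sector]:
        value = document.get(field_name)              # interception
        verdict = "ok" if check(value) else "REJECT"  # verification
        trace.append(f"{field_name}={value!r}: {verdict}")
    return trace

doc = {"policy_id": "POL-7781", "premium": -50}
for line in certify(doc, "insurance"):
    print(line)  # the negative premium is flagged for rejection
```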

[ix] Reference in the Maritime Industry MIIS (Multimedia Information Interchange System) http://www-ioa.epcon.gr/ecommerce/abstracts.htm

 

Amin Elsaleh

VANEDI, France

 

Last Updated on Friday, 14 July 2017 14:03
 
