
Big Data for supply chain management in the service and manufacturing
sectors: Challenges, opportunities, and future perspectives
Ray Y. Zhong a,⇑, Stephen T. Newman b, George Q. Huang c, Shulin Lan c
a Department of Mechanical Engineering, The University of Auckland, Auckland, New Zealand
b Department of Mechanical Engineering, University of Bath, Bath, UK
c HKU-ZIRI Lab for Physical Internet, Department of Industrial and Manufacturing Systems Engineering, The University of Hong Kong, Hong Kong
Article info
Article history:
Available online 15 July 2016
Keywords:
Big Data
Service applications
Manufacturing sector
Supply Chain Management (SCM)
Abstract
Data from the service and manufacturing sectors is increasing sharply, lifting a growing enthusiasm for the notion of Big Data. This paper investigates representative Big Data applications from typical services such as finance & economics, healthcare, and Supply Chain Management (SCM), as well as from the manufacturing sector. Current technologies are reviewed from the key aspects of storage technology, data processing technology, data visualization techniques, Big Data analytics, and models and algorithms. The paper then discusses current movements in Big Data for SCM in service and manufacturing world-wide, including North America, Europe, and the Asia Pacific region. Current challenges, opportunities, and future perspectives are highlighted, covering data collection methods, data transmission, data storage, processing technologies for Big Data, Big Data-enabled decision-making models, and Big Data interpretation and application. The observations and insights from this paper can be referred to by academia and practitioners when implementing Big Data analytics in the service and manufacturing sectors.
© 2016 Elsevier Ltd. All rights reserved.
1. Introduction
The service sector refers to the provision of services to businesses and final consumers, such as finance, healthcare, tradesmanship, tourism, computer services, and restaurants (Reijers, 2003). The manufacturing sector is the production of merchandise using raw materials, labor, machines, and tools, involving a wide range of human activities (Ferdows & De Meyer, 1990). Both sectors play important roles in the economy. Service constitutes the ‘‘soft” part, i.e. activities in which people offer their knowledge and time to improve performance, sustainability, productivity, and potential; manufacturing constitutes the ‘‘hard” part, i.e. activities in which people use machines and tools to transform raw materials into finished goods, move goods from manufacturers to retailers by vehicles, and carry out the disposal or recycling of used goods (Brouthers & Brouthers, 2003).
Nowadays, the service and manufacturing sectors are facing a data tsunami. The International Data Corporation reported that over 1600 Exabytes of data were created in 2015 across both sectors. Data volume continues to grow tremendously, in part because data from the service and manufacturing sectors are progressively being gathered by advanced information technologies such as ubiquitous-sensing mobile devices, aerial sensory techniques, cameras, microphones, Internet of Things (IoT) technologies (e.g. RFID, barcodes), and wireless sensor networks (Alfalla-Luque, Marin-Garcia, & Medina-Lopez, 2014; Dimakopoulou, Pramatari, & Tsekrekos, 2014; Zhong, Huang, Dai, & Zhang, 2013). Such datasets are so immense and complex that they are challenging to handle with on-hand database management tools or traditional processing applications.
‘‘Big Data”, originally referring to floods of data in the range of exabytes and beyond, has extended the scope of technological capability to store, manage, process, interpret, and visualize such amounts of data (Kaisler, Armour, Espinosa, & Money, 2013). The concept of Big Data was first introduced in an article in the ACM digital library in October 1997 (GilPress, 2012), where it was used to describe a visualization challenge for computer systems with very large datasets. Since then, it has attracted great attention from both academia and practice. Big Data has greatly stimulated the demand for specialists in information management, to the extent that Software AG, Oracle, IBM, Microsoft, SAP, EMC, HP, and Dell have spent more than $15 billion on data analytics and processing (Syed, Gillela, & Venugopal, 2013). Big Data is expected to be one of the top 10 prosperous markets in the coming century.
http://dx.doi.org/10.1016/j.cie.2016.07.013
0360-8352/© 2016 Elsevier Ltd. All rights reserved.
⇑ Corresponding author.
E-mail addresses: [email protected] (R.Y. Zhong), [email protected] (S.T. Newman), [email protected] (G.Q. Huang), [email protected] (S. Lan).
Computers & Industrial Engineering 101 (2016) 572–591
In 2010, it was estimated that this industry on its own was worth over $100 billion and was growing at about 10% a year (Weng & Weng, 2013). It is further estimated that the global Big Data market will reach $118.52 billion by 2022, growing at a compound annual growth rate of 26% over the forecast period from 2014 to 2022 (NewsOn6.com).
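Forecast figures of this kind can be sanity-checked with the standard compound-annual-growth-rate relation, end = start × (1 + rate)^years. The sketch below back-computes the 2014 market size implied by the quoted 2022 figure; the result is an illustration of the arithmetic, not a figure from the cited report:

```python
# Sanity-check a CAGR forecast: end_value = start_value * (1 + rate) ** years

def implied_start(end_value, rate, years):
    """Back-compute the starting value implied by an end value and a CAGR."""
    return end_value / (1 + rate) ** years

# $118.52B by 2022 at a 26% CAGR over 2014-2022 (8 years)
start_2014 = implied_start(118.52, 0.26, 8)
print(round(start_2014, 2))  # roughly $18.7B implied for 2014
```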
Service and Manufacturing Supply Chain Management (SM-SCM) has been undergoing digitization for several decades. Since SM-SCM is involved in a wide range of human activities, from aeronautical facilities to daily necessities, its performance and efficiency are significant, which has driven the adoption of Big Data (Eichengreen & Gupta, 2013). Twinning SM-SCM with Big Data can create better decision-making mechanisms for the use of natural resources. To this end, large funding initiatives such as Digging into Data explicitly encourage researchers and practitioners to engage in studies that can lead to a better understanding, development, and application of Big Data.
For a long time, SM-SCM has focused on the collection and storage of enormous amounts of data (Dekker, Pinçe, Zuidwijk, & Jalil, 2013). However, it faces great challenges in making full use of such data. In this paper, these challenges are summarized as the ‘‘5V”, given the typical characteristics of SM-SCM.
Volume: An enormous amount of data is generated every second within SM-SCM all over the world. For example, it is calculated that a personal care manufacturer generates 5000 data samples every 33 ms, resulting in about 152,000 samples per second, 9 million per minute, 13 billion per day, and 4 trillion samples per year (Markopoulos, 2012). The accumulation of ever-larger data sets floods data collectors, transfer networks, and storage facilities.
Velocity: The velocity of processing such huge data sets from SM-SCM is significant because data-driven decisions should be made as quickly as possible. The velocity mainly relies on the speed of data collection, the reliability of data transfer, the efficiency of data storage, the speed of excavating useful knowledge, as well as decision-making models and algorithms.
Variety: The vast data from SM-SCM are usually variable due to diverse sources and heterogeneous formats. New types of data proliferate from the various sensors used in manufacturing sites, on highways, in retail shops, and in facilitated houses. Integrating such diverse data into standard formats requires a more general and complex markup language.
Verification: There is a large amount of bad data (e.g. noise, inaccurate attributes, etc.) within SM-SCM Big Data, which should be verified so that good data can be picked out. Verification usually has to be carried out under certain authorities and security levels. Thus, verification processes designed and developed as tools to automatically check quality and compliance must consider different situations, some of which may be so complex that they are challenging to address.
Value: The value of Big Data is difficult to evaluate in SM-SCM. Firstly, extracting value from Big Data is tough because of the hurdles caused by the previous four factors. Secondly, it is challenging to examine the impacts on insights, benefits, and business processes within both sectors. Thirdly, the value of the reports, statistics, and decisions obtained from Big Data is hard to measure due to their large influences from both micro and macro perspectives.
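The Volume figures quoted above can be reproduced with simple rate arithmetic. The sketch below recomputes them from the 5000-samples-per-33-ms figure; the source's per-second and per-year figures are rounded:

```python
# Reproduce the sampling-rate figures quoted for the personal care
# manufacturer: 5000 samples every 33 ms.
samples = 5000
interval_s = 0.033  # 33 ms

per_second = samples / interval_s
per_minute = per_second * 60
per_day = per_minute * 60 * 24
per_year = per_day * 365

print(f"{per_second:,.0f}/s")    # ~151,515 per second (quoted as 152,000)
print(f"{per_minute:,.0f}/min")  # ~9.1 million per minute
print(f"{per_day:,.0f}/day")    # ~13.1 billion per day
print(f"{per_year:,.0f}/yr")    # ~4.8 trillion per year (quoted as 4 trillion)
```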
In order to deeply understand Big Data and stimulate potential techniques for tackling these challenges in SM-SCM, this paper presents a comprehensive investigation of Big Data for SM-SCM. The paper mainly contains four parts: representative Big Data applications in SM-SCM, Big Data technologies, current movements world-wide, and current challenges, opportunities, and future perspectives. Key areas in SM-SCM such as finance and economics, Logistics and Supply Chain Management (LSCM), and the manufacturing sector are the focus of Section 2. Section 3 concentrates on storage, data processing, and data visualization technologies, Big Data analytics, and Big Data models for decision-making, which are crucial concerns in SM-SCM. Section 4 reviews current movements in the major regions, including North America, Europe, and Asia Pacific. Section 5 provides a major discussion of the current challenges, opportunities, and future perspectives in data collection methods, data transmission, data storage, processing technologies for Big Data, and Big Data-enabled decision-making models, as well as Big Data interpretation and application. Section 6 draws a number of conclusions from the investigation and explores insights and lessons.
2. Representative Big Data applications in SM-SCM
Illustrative examples of Big Data applications in SM-SCM are reviewed, covering finance and economics, healthcare, biology, the IT sector, Logistics and Supply Chain Management (LSCM), and the manufacturing sector.
2.1. Finance and economics
It is estimated that Big Data in the global financial services sector will grow at a compound annual growth rate of 56.69% over the period 2012–2016 (Globe-Newswire, 2013). In most financial institutions, such as banks, insurance companies, and brokerage firms, Big Data is common due to the enormous numbers of transactions and activities, which directly affect how individuals, groups, and organizations manage scarce resources. Garel-Jones (2011) reported how the financial service sector can use Big Data analytics to predict client behaviors, unlocking insights in the data to better understand customers, competitors, and employees and thereby gain competitive advantage. Predictive modelling and real-time decision-making play a critical role for financial institutions seeking a winning edge in dynamic markets (Peat, 2013). In order to examine volatility in financial markets, a Big Data approach was proposed to compute the Volume-synchronized Probability of Informed Trading (VPIN) (Wu, Bethel, Gu, Leinweber, & Ruebel, 2013). For a CFO (Chief Financial Officer), a key figure in strategic decision-making, Big Data offers an opportunity to use the information, trends, and knowledge hidden in large data sets. To this end, a business intelligence and analytics tool was introduced that uses Big Data to assist CFOs in seeking better data, providing bigger value, and making greater decisions (Chen, Chiang, & Storey, 2012).
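In its simplified form, VPIN can be read as the average absolute order-flow imbalance over equal-volume trade buckets. The sketch below illustrates only that simplified averaging step; the bucket construction and buy/sell classification stages of the full method are assumed to have been done already:

```python
def vpin(buckets, bucket_volume):
    """Simplified VPIN: mean |buy - sell| imbalance per equal-volume bucket,
    normalized by the bucket volume. `buckets` is a list of (buy, sell)
    volume pairs, each pair summing to `bucket_volume`."""
    imbalances = [abs(buy - sell) for buy, sell in buckets]
    return sum(imbalances) / (len(buckets) * bucket_volume)

# Toy example: four buckets of 1000 shares each
buckets = [(600, 400), (500, 500), (800, 200), (600, 400)]
print(vpin(buckets, 1000))  # 0.25, i.e. 25% average imbalance
```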
It is believed that the capacity to analyze big financial data is core to successful competition; thus, a large number of financial institutions have taken the initiative in implementing Big Data technologies. For instance, in 2012, financial institutions such as a European Hedge Fund, Global Investment Bank, Retail Banking Innovation Leader, Asia/Pacific National Bank, Expanding U.S. Property Insurer, Global European Institution, Investment Research Institution, and Community Bank used Big Data to obtain achievements such as the optimization of price discovery and investment strategies for large portfolio trades and swaps, the creation of merchant intelligence and assistance in optimizing offers and pricing to retail customers, and the turning of social media into finely tuned market campaigns (Versace & Karen, 2012). Fang and Zhang (2016) investigated Big Data solutions for pioneering financial practitioners to gain actionable insights from massive market data.
2.2. Healthcare
With the rising adoption of digitized records and physicians' notes in healthcare, analytics for the generated Big Data is very significant. IBM Watson may be one of the pioneers in utilizing cloud databases and supercomputers to process large volumes of data so that doctors can use the mined information or knowledge for personalized medical treatment (Joshi & Yesha, 2012). Dell recently collaborated with the Translational Genomics Research Institute (TGen) to establish the world's first personalized medicine trial for pediatric cancer using Big Data and cloud infrastructure (O'Driscoll, Daugelaite, & Sleator, 2013). GNS Healthcare, a data analytics company, together with Covance, a research organization, adopted Big Data for diabetes clinical trial modelling (Harrison, 2012). The collaborating entities use the immense data gained from Covance's pre-clinical and clinical drug development together with GNS Healthcare's reverse engineering.
Big Data is even embedded in our daily life, in that we can use websites to examine our experiences of illness and healthcare. Lupton (2013) reported on a plethora of interactive digital platforms under the phenomena of Big Data and metric assemblages. Taking the pharmaceutical industry as an example, data are collected in vast volumes, so Cloudera, a U.S. service provider, helps these companies administer the collection of data (Schultz, 2013). Currie (2013) described an appropriate use of vast administrative data, such as hospital discharge abstracts and insurance claims, in pediatrics research to answer important questions. In the specific case of diabetes, a system called DiabeticLink has been implemented in both the U.S. and Taiwan to take advantage of Big Data in building a health social media system (Chen, Compton, & Hsiao, 2013). Big healthcare data has also been used to improve therapeutic effectiveness and safety (Schneeweiss, 2016). Comprehensive investigations of Big Data in healthcare can be found in the literature, exposing more opportunities in both academia and practice (Feldman, Martin, & Skotnes, 2012; Raghupathi & Raghupathi, 2014).
2.3. Supply Chain Management (SCM)
The global logistics industry has an ever-growing amount of Big Data and is flooded with real-time data from smart phones, sensors, digital machines, and B2B data exchanges. Such Big Data brings a new source of competitive advantage for logistics players carrying out supply chain management: enhanced visibility, the ability to adjust to demand and capacity fluctuations on a real-time basis, and insights into customer behaviors and patterns that enable smarter pricing and better products (Swaminathan, 2012). The Council of Supply Chain Management Professionals (CSCMP) is thus currently pursuing two complementary projects that aim to address what Big Data means for logistics and supply chain management (CSCMP, 2014). One is ‘‘Big Data: What does it mean for Supply Chain Management?”, carried out by Mark Barratt (Marquette University), Annibal Camara Sodero (University of Arkansas), and Yao Jin (University of Arkansas) to: (1) develop a taxonomy of data sources, including Big Data, that could be considered for decision-making in retail supply chains; (2) explore the current and potential use of Big Data analytics in and across multiple industries; and (3) identify the general and particular implications of the use of Big Data. The other is ‘‘The What, How and Why of Big Data in Supply Chain Relationships: A Structure, Process, and Performance Study”, a collaboration between R. Glenn Richey (The University of Alabama), Chad W. Autry (The University of Tennessee), Frank G. Adams (Mississippi State University), Tyler R. Morgan (The University of Alabama), Kristina Lindsey (The University of Alabama), and Taylor Wade (The University of Alabama). Investigations of research and applications were carried out by Wang, Gunasekaran, Ngai, and Papadopoulos (2016), who reviewed Big Data analytics in logistics and supply chain management. Recently, a special issue of POMS (Production and Operations Management) edited by Sanders and Ganeshan focused on Big Data in Supply Chain Management.
Leading logistics companies such as DHL and UPS have taken steps in the Big Data field to enhance their competitiveness in global LSCM. Table 1 shows the current movements in implementing Big Data strategies of the world's top 10 logistics companies for 2013.
2.4. Manufacturing sector
The manufacturing sector is being flooded by a huge amount of data, since various sensors, electronic devices, and digital machines are used in production lines, shop floors, and factories (Zhong et al., 2015). This sector keeps more data than any other, with an estimated close to 2 Exabytes of new data stored in 2010 (Nedelcu, 2013). It was reported by IDC that, over the next several years, manufacturers increasingly plan to use service as a competitive differentiator, along with Big Data analytics, for long-term profitable revenue (Spotfire, 2013). Executives at leading-edge manufacturing enterprises are leveraging Big Data to optimize operations and work out strategic decisions on a real-time basis (Khatri, 2013). Merck, a pharmaceutical firm specializing in producing vaccines, uses Big Data analytics to optimize its manufacturing (Henschen, 2014). Raytheon Corp. has used Big Data to enable smart factories, based on a powerful capacity for managing information from various data sources such as CAD models, sensors, instruments, Internet transactions, simulations, and digital records across the company (Noor, 2013). Itron, a water meter manufacturer, uses Big Data analytics to provide ‘‘smart grid” solutions that make it indispensable to its customers, while Schmitz Cargobull, a German truck body and trailer maker, uses such analytics to supervise the maintenance, cargo temperatures, and routes of its trailers (Stephen, Serguei, & Arnd, 2014). GE recently issued a white paper discussing how the industrial sector can benefit from Big Data to significantly improve operational performance (GE, 2014). According to the Japan Times, Toyota Motor launched a live traffic service using Big Data, targeted at local governments and businesses, based on 700,000 Toyota vehicles on the roads (Phneah, 2013); this service may bring 16 billion USD per year to Toyota. Siemens, using Big Data from some 100,000 measurements taken every day in power plants around the world, built remote diagnostic services (RDS) to analyze operational behaviors (Siemens, 2014). Rolls-Royce, the engine manufacturer, took Big Data to a higher gear with the Engine Health Monitoring Unit (EHMU), which uses sensor-collected data from different components, systems, and manufacturing lines to improve product quality (BigData-Startups, 2013). Ramco Cements Limited (RCL), an Indian flagship company founded in 1938, used Big Data to manage its product development, operations, and logistics (Dutta & Bose, 2015). A comprehensive investigation of Big Data in manufacturing has been carried out to evaluate its potential, covering analyses of crucial industry attributes, solutions, and case studies from 2014 to 2019 (MindCommerce, 2014).
3. Big Data technologies and models
Current Big Data technologies in terms of storage (pre-processing), data processing, data visualization (post-processing), Big Data analytics, as well as models and algorithms for decision-making have been identified, since these are the main concerns in the application of Big Data in SM-SCM.
3.1. Storage technology
With the great myriad of electronic devices widely used in SM-SCM, enterprise organizations have to store an ever-growing accumulation of data so that queries, analysis, and visualization of the data can be executed. Storage technology is thus one of the essential aspects, involving hardware devices and storage systems or mechanisms.
For decades now, some industries have run large data sets over high-performance computing (HPC) systems, where decision-makers would interrogate these data for business value (Adshead, 2013). However, Big Data today commonly involves vast unstructured, diversified, and heterogeneous data sets, which depend on rapid and efficient storage. The largest Big Data practitioners, such as Google, Facebook, Apple, and Twitter, have adopted hyperscale computing environments (HCE) to run the likes of Hadoop, NoSQL, and Cassandra, with PCIe flash storage in addition to disk storage to cut storage latency to a minimum (Chen, Mao, Zhang, & Leung, 2014). For small and medium-sized enterprises (SMEs) that need to handle relatively big data storage, the systems with the required attributes are usually scale-out or clustered NAS (Network-Attached Storage), based on file-access shared storage using parallel file systems distributed across many network storage nodes (Raman, 2014). Specialized suppliers of clustered NAS solutions, such as Isilon (U.S., founded in 2001), BlueArc (U.S., founded in 1998), Huawei (China, founded in 1988), NetApp (U.S., founded in 1992), Gluster Inc. (India, founded in 2005), EMC (U.S., founded in 1979), and Hitachi (Japan, founded in 1910), are able to provide reliable solutions that can handle a huge number of files without the kind of performance degradation that afflicts ordinary file systems as they grow. It was reported that the first two of these companies were recently bought by the last two big storage suppliers, which are contemplating taking over the market (Higginbotham, 2010; Network Computing, 2011).
The other Big Data storage method is the object-based mechanism, in which each file has an identifier that indexes the data and its location. Object storage systems can scale to very high capacities and enormous numbers of files, so enterprises can take advantage of Big Data in the way DNS handles things over the Internet (Mesnier, Ganger, & Riedel, 2003). Take ObjectRocket from RACKSPACE as a typical object storage system: end-users benefit from low latency, high performance and availability, and flexible scalability over the cloud through six data centers across the world, namely the UK, Hong Kong, Australia, and three in the USA. Due to the advantages of object-based storage, many companies such as Amazon, Windows Azure, AT&T, Facebook, and Google Developers are adopting this method by integrating cloud technologies such as OpenStack, Cleversafe, Amplidata, Caringo, Nirvanix, and Atmos (O'Connell, 2013). A recent survey of the current state of the art in Big Data storage was carried out by Strohbach, Daubert, Ravkin, and Lischka (2016), aiming to create a cross-sectorial technology road map.
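The identifier-indexed mechanism described above can be illustrated with a toy in-memory object store. The class, its content-derived identifiers, and the node-placement rule below are illustrative assumptions, not any vendor's design:

```python
import hashlib

class ToyObjectStore:
    """Minimal flat-namespace object store: content is addressed by an
    identifier, and a separate index maps identifiers to locations
    (here, simulated storage nodes)."""

    def __init__(self, nodes=4):
        self.nodes = [dict() for _ in range(nodes)]
        self.index = {}  # object id -> node number

    def put(self, data: bytes) -> str:
        oid = hashlib.sha256(data).hexdigest()  # content-derived identifier
        node = int(oid, 16) % len(self.nodes)   # deterministic placement
        self.nodes[node][oid] = data
        self.index[oid] = node
        return oid

    def get(self, oid: str) -> bytes:
        return self.nodes[self.index[oid]][oid]

store = ToyObjectStore()
oid = store.put(b"sensor reading 42")
print(store.get(oid))  # b'sensor reading 42'
```

Because objects live in one flat namespace addressed only by identifier, capacity can grow by adding nodes without restructuring any directory hierarchy, which is the scalability property the text attributes to object storage.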
3.2. Data processing technology
The technologies for Big Data processing are broad and encompass many trends and emerging techniques such as frameworks, data query, and cleansing. In this section, data processing excludes data storage and visualization, so as to differentiate the three major stages of Big Data: pre-processing, processing, and post-processing.
One such technology, Apache Hadoop, is an open-source software framework written in Java for processing large-scale data sets (Dittrich & Quiané-Ruiz, 2012). The framework comprises several modules, namely Hadoop Common, the Hadoop Distributed File System (HDFS), Hadoop YARN, and Hadoop MapReduce, which are designed to automatically handle, in software, the hardware failures that commonly occur. Hadoop claims a number of significant advantages, demonstrated by successful applications
Table 1
Current movements in Big Data from the top 10 logistics companies (2013).
1. The United States Postal Service (USPS), United States: A Portal service with an architectural approach is designed to rapidly ingest and process Big Data culled from the postal enterprise; USPS has relocated the database to in-memory to meet real-time requirements (MeriTalk, 2014; Rath, 2013).
2. DHL Express, Germany: DHL recently launched a new trend report entitled Big Data in Logistics, proposing a Big Data solution to showcase operational efficiency, customer experience, and new business models (Martin, Moritz, & Frank, 2013).
3. United Parcel Service of North America, Inc. (UPS), United States: It was reported by Dave Barnes, chief information officer at UPS, that UPS spends $1 billion a year on Big Data to change the shipping business and cut fuel and company costs (Bloomberg, 2013).
4. A.P. Moller – Maersk Group (Maersk), Denmark: The Strategic R&D Department developed an innovation project to analyze Big Data assets from customers, achieved in close collaboration with key suppliers and Maersk Business Units (Fawcett & Waller, 2013).
5. FedEx Corporation, United States: FedEx has succeeded in enabling customers to track packages in real time, a capability built on Big Data (Capron, 2013).
6. La Poste, France: Supported by Sopra ISD, La Poste Courrier has proposed a platform solution based on Big Data: a new version of its search engine using CloudView technology (Crochet-Damais, 2013).
7. China Ocean Shipping (Group) Company (COSCO), China: COSCO surged into new markets with IBM and SAP to increase competitiveness and stimulate business growth using Big Data (IBM-SAP, 2013).
8. Japan Post, Japan: A business plan was proposed to move into new areas such as expanding its e-commerce activities and developing more commercial real estate to take advantage of the Big Data from its customers (Fukase, 2013).
9. Nippon Express Co., Ltd., Japan: NEC recently launched a project based on OpenFlow network control technology to manage its Big Data as a datacenter over cloud networks, so as to improve efficiency and reduce operating costs (NEC, 2012).
10. Royal Mail Group Limited, United Kingdom of Great Britain and Northern Ireland: Royal Mail Group recently invested in a multi-function capacity to use Big Data methods to drive innovation in its products, services, and operations (LMG, 2014).
with a large number of big companies. On February 19, 2008, Yahoo! launched the world's largest Hadoop production application, running on a cluster of more than 10,000 cores and producing data used in every Yahoo! Web search query; in 2010, Facebook claimed the largest Hadoop cluster in the world at 21 PB (Petabytes), and two years later, on June 13, 2012, the data in its Hadoop cluster reached 100 PB (HDFS, 2010; Ryan, 2012). Hadoop has now been widely adopted, by more than half of the Fortune 50, for Big Data processing (Eatontown, 2012; Tripathy & Mittal, 2016).
MapReduce is one of the key modules of Hadoop. It is a programming paradigm, with associated implementations, for processing vast data sets on large numbers of distributed clusters of servers (Lin, 2013). Models and algorithms play a critical role in MapReduce, which uses them to handle Big Data efficiently; examples include web analytics applications, scientific applications, and social networks (DeLyser & Sui, 2013; Dittrich & Quiané-Ruiz, 2012; Jitkajornwanich, Gupta, Elmasri, Fegaras, & McEnery, 2013; Xu et al., 2016). Early MapReduce technologies suffered from performance problems that drew severe criticism of their low-level programming, record-level manipulation, and lack of schema support; today, however, these issues are becoming history with support from industry heavyweights like IBM, Microsoft, and Oracle. The technology has been widely adopted in organizations ranging from tiny startups to Fortune 500 enterprises due to its substantial advantages (Bechini, Marcelloni, & Segatori, 2016).
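The MapReduce paradigm can be sketched with the classic word-count pattern: a map phase emits (key, 1) pairs, a shuffle groups the pairs by key, and a reduce phase sums each group. The single-process sketch below illustrates only the programming model that Hadoop distributes across clusters:

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in the document."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the grouped values for each key."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big clusters", "big data analytics"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 3, 'data': 2, 'clusters': 1, 'analytics': 1}
```

Because each map call and each reduce call touches only its own slice of the data, the framework can run them on different machines and merge the results, which is the source of MapReduce's scalability.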
With the emergence of cloud computing technology, Big Data processing in the cloud has attracted huge attention due to the significant advantages of converged infrastructure and shared services. Ahuja and Moore (2013) comprehensively discussed approaches including compression, data deduplication, caching, and protocol optimization for establishing a cloud computing environment. In such an environment, Big Data processing spans the cloud computing platform, cloud architecture, cloud databases, and storage schemes (Hashem et al., 2015; Ji, Li, Qiu, Awada, & Li, 2012). In recent years, Fujitsu Laboratories has been designing and developing core and application-promoting technologies for processing Big Data in a cloud environment (Tsuchiya, Sakamoto, Tsuchimoto, & Lee, 2012). The two fundamental technologies on which they concentrate are distributed data storage with complex event processing (CEP), and workflow description for distributed data processing; these realize high-speed aggregation with a distributed key-value store (KVS) and derive valuable information through a simple and productive environment using DISPEL, respectively. To handle large data volumes at high processing velocity, Flume, an open-source real-time Big Data processing framework, is capable of faster, larger, and easier processing of data in the cloud (Wang, Rayan, & Schwan, 2012).
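The high-speed aggregation that such KVS/CEP pipelines perform can be sketched as incremental per-key aggregation over an event stream: each arriving event updates running statistics, so queries never rescan historical data. The class below is a toy single-node stand-in, not Fujitsu's technology:

```python
from collections import defaultdict

class StreamAggregator:
    """Incremental per-key aggregation: each event updates a running
    count and sum, so per-key means can be read out at any time in O(1)."""

    def __init__(self):
        self.count = defaultdict(int)
        self.total = defaultdict(float)

    def ingest(self, key, value):
        self.count[key] += 1
        self.total[key] += value

    def mean(self, key):
        return self.total[key] / self.count[key]

agg = StreamAggregator()
for key, value in [("sensor-a", 20.0), ("sensor-a", 22.0), ("sensor-b", 5.0)]:
    agg.ingest(key, value)
print(agg.mean("sensor-a"))  # 21.0
```

In a distributed KVS the two dictionaries would be partitioned by key across nodes; the per-event update logic stays the same on each partition.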
3.3. Data visualization technique
One of the key purposes of Big Data analytics is to excavate implicit information and knowledge from immense data sets to support decision-making in business, healthcare treatment, cyber security, disaster management, and so on. Big Data visualization technology is therefore crucial, because the vast data generated from SM-SCM must be formatted, standardized, and interpreted to help scientists, practitioners, and other end-users extract invaluable assets through visualization tools. To this end, visual analytics of Big Data can be improved by computational methods. Choo and Park (2013) introduced an iteration-level interactive visualization approach to display intermediate results and allow users to interact with those results on a real-time basis, ultimately customizing computational methods for visual analytics with Big Data. An object-oriented approach was proposed for computational earth science to transform Big Data into insight, helping to better understand, describe, and model entities such as a storm, an earthquake, or an ecological region (Sellars et al., 2013). This visualization method has been widely used in the earth and environmental sciences due to its computational advantages, for instance in tornado forecasting, weather prediction, and research on geographical terrain morphology. A coupled-tools approach for scientific Big Data visualization was proposed for the analysis of large-scale time-dependent particle-type data (Artigues et al., 2015).
Preliminary examples of Big Data visualization rely heavily on the application background. Chevron, an American multinational energy corporation, used TIBCO Spotfire to develop iRAVE (Integrated Reservoir Analysis and Visualization Environment), which exposes the huge volumes of data from oil fields and presents them in a visual analytics environment for reviewing the company's oil and gas fields (IntelCenter, 2013). Pfizer data scientists used TIBCO Spotfire to visualize the multitude of data from more than 500 clinical trials around the world for safety, efficacy, and operations tracking (O'Connell, 2010). In text-based visual analytics, Leximancer
and Discursis, both developed at the University of Queensland, support thematic analysis and sequential analysis for analytic decision-making (Angus, Rintel, & Wiles, 2013). These software tools use word frequency statistics
to create their respective visualizations so that end-users are able
to determine the main topics from the texts. Google Fusion Tables
(GFT), a Big Data collaboration and visualization tool, offers cooperative data management in the cloud with access to large data-processing resources (Madhavan et al., 2012). Interactive maps built with GFT have been highly successful: prominent news outlets such as the UK Guardian, Los Angeles Times, Chicago Tribune, and Texas Tribune have published sets of articles based on such visualized Big Data applications. Truthy, a visual analytics system for collecting and analyzing political discourse on Twitter, was adopted
by Indiana University School of Journalism to study social behavior
at a new scale (McKelvey, Rudnick, Conover, & Menczer, 2012).
This system provides an interactive dashboard for studying communication processes on the Twitter microblogging platform and can produce high-level statistical and visual overviews of vast communication networks.
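The word-frequency approach used by tools such as Leximancer and Discursis can be illustrated with a minimal sketch; the documents, stopword list, and function names below are hypothetical illustrations, not part of any cited tool:

```python
from collections import Counter
import re

# Illustrative documents (hypothetical)
docs = [
    "Supply chain data improves delivery and inventory decisions.",
    "Big data analytics supports supply chain visibility and delivery.",
    "Inventory and delivery data drive supply chain analytics.",
]

STOPWORDS = {"and", "the", "a", "of", "to", "in", "on", "for", "is", "are"}

def word_frequencies(texts):
    """Count word frequencies across all texts, ignoring stopwords."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts

def main_topics(texts, n=5):
    """Return the n most frequent words as a crude proxy for main topics."""
    return [word for word, _ in word_frequencies(texts).most_common(n)]

print(main_topics(docs))
```

Real tools layer concept clustering and sequence analysis on top of such counts, but the frequency statistics are the common starting point.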
Data visualization is ubiquitous, and Big Data becomes much more valuable once it has been visualized and analyzed. Open-source data visualization tools are therefore preferred by end-users, since they are free to apply and can easily be used in conjunction with a company's existing systems (Zikopoulos & Eaton, 2011). Zhong, Lan, Xu, Dai, and Huang (2016) introduced a visualization tool for RFID Big Data from cloud-enabled manufacturing shop floors. Lurie (2014) introduces 39 types of data visualization tools for Big Data analytics that serve different end-user needs, ranging from easy drag-and-drop functionality in simple applications to complex business intelligence with sophisticated data analysis.
3.4. Big Data analytics
Big Data analytics refers to the process of examining large amounts of data of variable types to uncover hidden patterns, unknown correlations, and other useful information and knowledge (Rouse, 2012). It has attracted great attention due to its ability to provide patterns, information, and knowledge that increase business benefits, improve operational efficiency, and explore potential markets (LaValle, Lesser, Shockley, Hopkins, & Kruschwitz, 2013). With the primary goal of assisting companies in making better business decisions, Big Data analytics allows users to analyze huge volumes of data from different sources such as the database, the Internet, mobile-phone records and locations,
576 R.Y. Zhong et al. / Computers & Industrial Engineering 101 (2016) 572–591
as well as sensor-captured information. In order to analyze such huge data in various formats, the technologies associated with Big Data analytics cover a wide range and center on open-source software frameworks that can support the analysis of large data sets across clustered systems (Zakir, Seymour, & Berg, 2015). This section reports on Big Data analytics related tools, techniques, and practices.
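As a minimal illustration of uncovering unknown correlations in operational data, the sketch below computes a Pearson correlation matrix over synthetic records and flags strongly correlated variable pairs. All column names, values, and the threshold are illustrative assumptions, not taken from any cited tool:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic operational records (hypothetical column names)
n = 1000
order_volume = rng.normal(100, 15, n)
shipping_cost = 2.5 * order_volume + rng.normal(0, 10, n)   # driven by volume
web_visits = rng.normal(500, 50, n)                          # unrelated

data = np.column_stack([order_volume, shipping_cost, web_visits])
names = ["order_volume", "shipping_cost", "web_visits"]

# Pearson correlation matrix over the columns
corr = np.corrcoef(data, rowvar=False)

def strong_pairs(corr, names, threshold=0.8):
    """Report variable pairs whose |correlation| exceeds a threshold."""
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) > threshold:
                pairs.append((names[i], names[j], round(corr[i, j], 3)))
    return pairs

print(strong_pairs(corr, names))
```

On real SM-SCM data the same pattern-screening step would run over far more columns, typically distributed across a cluster, but the statistical core is unchanged.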
Big Data analytics is a new fact of business life that requires appropriate tools for handling huge volumes of data. Such tools are mainly used to identify trends, detect patterns, and glean invaluable findings. For marketing analytics, fraud detection, and financial risk assessment, a mixed business intelligence (BI) tool has been used (Stedman, 2014); it integrates Oracle's Big Data appliance with Cloudera's Hadoop distribution for these purposes. Typical tools include GridGain for processing large amounts of real-time data, HPCC (High Performance Computing Cluster) for real-time calculations, Storm for handling huge data sets with distributed real-time computation, and Talend for providing a range of BI services (Loshin, 2013). Harvey (2012) investigates the top 50 tools, categorized into platforms and tools, databases/data warehouses, BI, data mining, and programming languages, which enable Big Data analytics to help enterprises improve their processes and performance.
Big Data analytics tools are enabled by suitable techniques, and many large IT companies provide series of core technologies and solutions for different applications. Oracle Advanced Analytics (OAA), combining powerful in-database and open-source R algorithms, enables predictive analytics, advanced numerical computations, and interactive graphics (Oracle, 2013). SAP High-Performance Analytic Appliance (HANA) uses parallel multicore processor techniques to manage huge databases and provide various predictive analytics solutions, for example on customer movements and market fluctuations (SAP, 2014). SAP HANA comprises a TREX (Text Retrieval and information EXtraction) search engine, an in-memory OLTP (Online Transaction Processing) database, and a new architecture that enables advanced user decision-making. Microsoft, to fully realize Big Data analytics, provides a complete platform that integrates modeling, analysis, and visualization tools to scale analysis company-wide (Microsoft, 2014). Power BI for Office 365 and Microsoft Azure are two typical products using this platform to build visualization tools that uncover patterns in huge data sets. IBM SPSS Modeler, another extensive predictive analytics platform, provides predictive intelligence to assist individuals, groups, systems, and enterprises in making decisions (IBM, 2013). Using a set of advanced algorithms, entity analytics, decision management, and optimization, IBM SPSS Modeler helps users make the right decisions. Bhatti, LaSalle, Bird, Grance, and Bertino (2012) investigated Big Data analytics enabling techniques from hardware and software perspectives.
Practices of Big Data analytics are now widely reported. One of the chief goals is to make full use of the data to achieve the ‘‘right data, for the right user, at the right time" (TDWI, 2014). Thus, different companies practice Big Data analytics according to their specific situations and issues. McKinsey & Company, a global management consulting firm, uses Big Data analytics to provide a rich set of services that help various firms achieve sustainable improvements in performance. In the financial sector, for example, this company uses Big Data analytics to help small business banks analyze consumer behavior, implement digital innovation, and upgrade services (Biesdorf, Court, & Willmott, 2013). Amazon, the Seattle-based e-commerce giant, leverages Big Data analytics to predict customers' behavior so that goods can be shipped before customers have even decided to buy (Marr, 2014). Intel recently adopted Big Data analytics to accelerate the development and deployment of wearable apps with data-driven intelligence, integrating a number of tools and algorithms from Intel alongside a cloud-based data management system (Bell, 2014). A wide range of Big Data analytics practices has been reported to improve customer relationship management (CRM), increase profit margins, find potential markets, and carry out various predictions in the service and manufacturing sectors (Lardinois, 2014; Lohman, 2013; Salehan & Kim, 2016; Talia, 2013; Vera-Baquero, Colomo-Palacios, & Molloy, 2013; Zhong, Xu, Chen, & Huang, 2015).
3.5. Models and algorithms for decision-making
Big Data models and algorithms from theoretical perspectives have been widely reported in recent years to support SM-SCM decision-making. This section categorizes these models and algorithms into Big Data presentation models, information and knowledge mining approaches, and Big Data-driven optimization models.
Data presentation is crucial because the data generated in service and manufacturing are usually unstructured and heterogeneous. To facilitate tuple-based Big Data presentation, a dynamic distributed dimensional data model (D4M) was proposed to allow linear algebra operations inside a database (Kepner et al., 2012). There are large numbers of moving objects in SM-SCM. For identifying moving data, a space- and infrastructure-based concept, integrating precise transportation modes, was presented to model the Big Data of moving objects (Xu & Güting, 2013). This data model can precisely represent location via a generic presentation. Given the great myriad of sensors used in the manufacturing and service sectors, sensed data should be formatted and standardized for further utilization. To this end, an n-dimensional RFID-Cuboids model was introduced (Zhong, Huang, & Dai, 2014); it twins space and time to label logistics behaviors within the whole supply chain. Ontology is another presentation for identifying Big Data: an ontology-based data model for the intelligent handling of customer complaints was presented to offer enterprises useful information and knowledge (Lee, Wang, & Trappey, 2015).
Based on these data presentation approaches, invaluable information and knowledge can be mined. A new algorithm combining semantic technology and cloud computing was proposed to reason over and process Big Data (Qu, 2012). The algorithm adopts the resource description framework (RDF) to present Big Data, with the data schema transformed into a finite semantic graph (FSG) that the reasoning algorithm can analyze when processing massive data sets. To convert Big Data into tiny data, an algorithm with constant-size coresets for k-means, principal component analysis (PCA), and projective clustering was introduced (Feldman, Schmidt, & Sohler, 2013); using a merge-and-reduce approach, it achieves polynomial update time per point in a cloud-based environment. To address the characteristics of Big Data from SM-SCM, a HACE theorem was introduced for discovering customers' interests (Wu, Zhu, Wu, & Ding, 2014). This model involves demand-driven aggregation of information sources for mining and analyzing user interests. A Web semantics-based model was used to integrate ontologies for interpreting manufacturing information and knowledge from massive production data, which can support advanced intelligent manufacturing (Ramos, 2015).
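The idea of reducing Big Data to tiny data can be conveyed with a minimal sketch: draw a small subset of a large point set and run k-means on it. Note that Feldman et al. construct carefully weighted coresets with provable guarantees; the plain uniform sample below is a deliberate simplification used only to illustrate the workflow:

```python
import numpy as np

rng = np.random.default_rng(42)

def kmeans(points, k, iters=50):
    """Plain Lloyd's k-means on a numpy array of shape (n, d)."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers

# Large synthetic data set with two well-separated clusters
big = np.vstack([rng.normal(0, 1, (50_000, 2)),
                 rng.normal(10, 1, (50_000, 2))])

# "Tiny data": a small uniform sample standing in for a coreset
tiny = big[rng.choice(len(big), 500, replace=False)]

centers = kmeans(tiny, k=2)
print(np.sort(centers[:, 0]))  # roughly one center near 0 and one near 10
```

A true coreset would additionally carry per-point weights so that clustering the small set provably approximates clustering the full set.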
To make full use of the mined information and knowledge, optimization models based on Big Data have been investigated in the literature. Based on mean-field analysis, a novel modelling approach comprising a set of methods for approximate inference of probabilistic models derived from statistical physics was proposed to optimize SCM system performance (Castiglione, Gribaudo, Iacono, & Palmieri, 2014). Product lifecycle management (PLM) is
a typical application of using Big Data to optimize each stage. Various data from the beginning, middle, and end of life are converted into knowledge to analyze each stage and achieve optimal services (Li, Tao, Cheng, & Zhao, 2015). To achieve an optimal SCM network, an artificial neural network algorithm was used for network configuration utilizing Big Data (Stateczny & Wlodarczyk-Sielicka, 2014). Another optimization model for supply chain practice was built using Big Data analytics from the hashtag #supplychain and Twitter (Chae, 2015). For minimizing the sum of a partially separable smooth convex function, parallel coordinate descent methods for Big Data optimization were presented (Richtárik & Takáč, 2015). These methods allow for modelling situations with busy or unreliable processors, where the number of blocks updated at each iteration is random.
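The coordinate descent idea can be sketched for the simplest smooth convex case, least squares. The serial, single-coordinate version below is a simplification of the parallel block methods of Richtárik and Takáč, and the problem data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic least-squares problem: minimize f(x) = 0.5 * ||A x - b||^2,
# a smooth convex function whose partial derivatives are cheap per coordinate
m, n = 200, 10
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true

def coordinate_descent(A, b, iters=5000):
    """Randomized coordinate descent: at each step pick one coordinate j
    and take the exact minimizing step along that coordinate."""
    x = np.zeros(A.shape[1])
    col_sq = (A ** 2).sum(axis=0)        # per-coordinate curvature ||A_j||^2
    residual = A @ x - b
    for _ in range(iters):
        j = rng.integers(A.shape[1])
        grad_j = A[:, j] @ residual       # partial derivative w.r.t. x_j
        step = grad_j / col_sq[j]         # exact line search for quadratics
        x[j] -= step
        residual -= step * A[:, j]        # keep the residual consistent
    return x

x_hat = coordinate_descent(A, b)
print(np.linalg.norm(x_hat - x_true))   # should be close to zero
```

The parallel variants update many randomly chosen blocks at once; the per-coordinate step above is the building block they distribute across processors.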
4. Current movements world-wide with Big Data
The current movements in major countries and regions are reviewed so that a world-wide view of Big Data across North America, Europe, and the Asia Pacific region can be obtained.
4.1. North America
On March 29, 2012, the Obama administration announced an initiative that aimed to explore how Big Data could address crucial problems faced by the government. This significant initiative is the Big Data Research and Development program, composed of 84 different projects across six federal departments (Executive Office of the President, 2012). These projects were committed more than $200 million to improve the ability to extract knowledge and insights from large and complex collections of Big Data. Among them, SM-SCM is a key part, which seeks to advance cutting-edge core technologies to collect, store, preserve, manage, analyze, and share vast data sets; to harness such technologies to accelerate the pace of knowledge discovery in science and engineering; and to expand the workforce needed to develop and use Big Data technologies (Gianchandani, 2012).
Amid this wave of governmental promotion of Big Data in SM-SCM, the U.S. National Science Foundation (NSF) received $25 million in funding for core techniques and technologies to advance Big Data science and engineering. NSF sought research proposals focusing on one or more science and engineering aspects such as data collection and management, data analytics, and e-science collaboration environments, in terms of appropriate models, policies, and technologies targeted at services, emergency response and preparedness, clean energy, and advanced manufacturing. Since 2005, 17,377 US and 26 non-US projects related to Big Data have been awarded by the NSF (www.nsf.gov/awardsearch/). The detailed awards by NSF organization are shown in Fig. 1, from which it can be seen that service and engineering fields are the key areas of concentration for the US government. The Directorate for Computer & Information Science & Engineering, the Directorate for Geosciences, and the Directorate for Mathematical & Physical Sciences, for instance, received 3724, 3390, and 2525 awards respectively, making them the three most awarded directorates and accounting for 52.63% of total awards from 2005 to 2015. This may explain why the service and engineering fields are so energetic, supported by advanced decision-making and wise responses to varied situations based on Big Data techniques and technologies.
To look more deeply into the movement of Big Data in SM-SCM within the US NSF awards, statistical data from 2007 to 2015 were obtained through advanced searches on the NSF official website. Fig. 2(a) shows the yearly awards, which increase sharply. Among these samples, only 21 projects were awarded in 2007, but the number reached 11,946 in 2015. Based on the sample data, the red1 square-dot line is a prediction that indicates the linear increase of NSF awards. From this prediction line, it can be estimated that around 14,000 awards for Big Data in SM-SCM could be made in 2016.
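The linear trend can be reproduced from the yearly counts reported above. Note that a plain ordinary least-squares fit, as sketched below, lands somewhat lower than the visual estimate of around 14,000, so the projected figure depends on how the trend line is fitted:

```python
import numpy as np

# Yearly NSF awards related to Big Data in SM-SCM, 2007-2015 (from Fig. 2(a))
years = np.arange(2007, 2016)
awards = np.array([21, 71, 200, 1142, 2556, 5481, 7732, 9523, 11946])

# Ordinary least-squares line: awards ~ slope * year + intercept
slope, intercept = np.polyfit(years, awards, 1)

# Extrapolate one year ahead
pred_2016 = slope * 2016 + intercept
print(round(pred_2016))
```

The fit yields roughly 12,000 to 12,500 projected awards for 2016; a line fitted by eye to the later, steeper years gives a higher figure.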
Fig. 2(b) presents the awarded amounts from 2007 to 2015. In the first three years (2007–2009), the average amount committed per project was $0.40 M, $2.39 M, and $1.14 M respectively. This implies that most of the awards focused on basic or common core technologies, which are able to profoundly influence the targeted areas. After rich achievements were obtained from these early awards, the award amount increased steadily from 2010 to 2011, dipped slightly in 2012, and then increased sharply again from 2013 to 2015, by over 100 million US dollars yearly. Although the total award amount has increased year by year, the average amount per award was $56,962.37 (2013), $57,605.19 (2014), and $57,363.64 (2015). The green prediction line in the figure suggests that the award amount will increase steadily, reaching a total of about $700 M in 2016.
Canada, another major country in North America, has put great effort into Big Data research and applications in SM-SCM. The Natural Sciences and Engineering Research Council of Canada (NSERC), the major governmental funding body, aims to support advanced studies, promote discovery research, and foster innovation in Canadian companies through postsecondary research projects. Searching the database with the keyword Big Data finds a total of 7509 projects awarded between 2005 and 2013. Table A1 shows the statistics of awards covering the topic of Big Data in different areas from 2010 to 2014, according to the NSERC database (http://www.nserc-crsng.gc.ca/).
From 2010 to 2013, a total of 4101 projects were awarded by NSERC, and the area of advancement of knowledge, covering basic research and core technology development, was preferentially funded. This area accounts for 32.82% of the projects (1346 awards) and 36.47% of the awarded amount. Information and communication services received 691 funded projects and 14.25% of the awarded amount. From the perspective of the service and manufacturing areas, with incomplete statistics covering advancement of knowledge (engineering, information, computer and communication, life sciences, medical and health sciences, and social sciences), commercial services, health, education and social services, information and communication services, manufacturing processes and products, as well as transportation systems and services, the proportion of awards is 55.08%, with 56.39% of the amount. This implies that Big Data in SM-SCM currently attracts much attention from the Canadian government, which has funded plenty of projects aiming to address the challenges and carry out advanced research on technologies, concepts, and models for supporting data-based service, manufacturing, and decision-making.

Fig. 1. Awards statistics from NSF organizations, 2005–2015: Directorate for Computer & Information Science & Engineering (3724), Directorate for Geosciences (3390), Directorate for Mathematical & Physical Sciences (2525), Directorate for Biological Sciences (2494), Directorate for Engineering (2045), Directorate for Social, Behavioral & Economic Sciences (1650), Directorate for Education & Human Resources (1370), Office of the Director (205).

1 For interpretation of color in Fig. 2, the reader is referred to the web version of this article.
4.2. Europe
The European Commission recently launched the Digital Agenda for Europe, a Europe 2020 Initiative, with the aim of making Big Data work for Europe (Craglia et al., 2012). In the European Commission's view, Big Data makes it possible to: (1) transform Europe's service industries by creating innovative information products and services; (2) increase the productivity of all sectors through advanced business intelligence; and (3) better address societal challenges and increase efficiency in the public sector. To this end, the European Union (EU) started several activities. In October 2013, the European Commission launched its communication on the data-driven economy, which concentrated on the digital economy, innovation, and services as crucial drivers of economic growth (EC, 2013). The communication sets out new requirements for the right framework conditions for Big Data and Cloud Computing. Accordingly, current research funding opportunities in the field of Big Data were highlighted at the Information and Networking Days for the H2020 Work Programme 2014–2015, connecting European facilities in terms of Big Data research, innovation, and take-up. From 10 June 2013 to 5 July 2014, there were three main funding opportunities for Big Data research in the service industry: ‘‘Best practices for ICT procurement based on standards in order to promote efficiency and reduce lock-in" (10/06/2013), ‘‘Study on a 'European data market' and related services" (23/07/2013), and ‘‘Deployment of an EU Open Data core platform: implementation of the pan-European Open Data Portal and related services" (05/07/2014). Meanwhile, the NSF Europe office has supported a great myriad of projects related to Big Data in SM-SCM; details can be found on the official website at http://www.nsf.gov/od/iia/ise/europe/nsf-europe-ofc.jsp. A total of 22,850 projects related to Big Data in SM-SCM have been identified in the European Commission database, of which 2801, 2466, and 5787 belong to FP5, FP6, and FP7 respectively. In recent years, the European Commission has continuously concentrated on Big Data research and practice in terms of general techniques, the service industry, and applications or tools that are able to transform traditional industries such as manufacturing, logistics, and energy.
Fig. 3 presents the projects sponsored in European countries with over 100 awards each. Three countries, the UK, Germany, and France, have each been awarded more than 3000 Big Data related projects, and the UK leads Europe with 4649. These three developed countries take up 51.09% of the total awarded projects, most of which focus on common core technologies, finance, services, and logistics. In one of the latest projects, a collaboration between KPMG and Imperial College London, over £20 M was invested to create a KPMG centre for advanced business analytics, aiming to put the UK at the forefront of data science and top services in the analysis of business capital, growth opportunities, people, operations, and resilience (Myers, 2014). Another example is Europe's biggest Big Data community, Big Data London. Founded on 25 February 2011, it has 6 group sponsors and 3150 members involved in Big Data projects, technologies, applications, education, and discussions. These two examples boost the UK's standing in Big Data research on SM-SCM in Europe and world-wide, placing the UK service industry at the top level.
Among the awarded projects, Table A2 lists current research topics in information and communication technologies (ICT), a core area of the EU Seventh Framework Programme (FP7) that covers a wide range of concerns in SM-SCM. The table highlights the focal technologies and technical features of each project so that key points can be noted when implementing Big Data applications in different service and industrial fields.
4.3. Asia Pacific
According to a recent report from IDC, the Big Data technology and services market in Asia will grow at a 34.1% compound annual growth rate, from $548.4 M in 2012 to $2.38 billion in 2017
(Jimenez et al., 2013). Conventional wisdom says: ‘‘Asia-Pacific lags
the United States in enterprise technology adoption, though Australia and New Zealand are regional exceptions. Japan, while
aggressive consumers of personal electronics, are conservative
and slow in the enterprise. Hong Kong and Singapore are often in
the middle of the pack. China is difficult to generalize.”
(Drummonds, 2013). That is because the adoption of Big Data
strategies by businesses in this region has been relatively poor,
even in some of the leading and more advanced companies.
Fig. 2. Analysis of awards from a yearly view: (a) awarded projects per year, 2007–2015 (21, 71, 200, 1142, 2556, 5481, 7732, 9523, and 11,946); (b) awarded amounts per year, 2007–2015 ($8.42 M, $169.98 M, $227.45 M, $286.79 M, $330.45 M, $326.13 M, $440.43 M, $548.57 M, and $685.27 M).
Take the world's two most populous countries, India and China, for example: from January 2009 to December 2015, with incomplete statistics, the total grants for Big Data in SM-SCM from the main government research foundations are 146 and 346 respectively, far fewer than in some countries in North America and Europe. Fig. 4 shows recent statistics from the National Natural Science Foundation of China (NSFC) and the Ministry of Science and Technology of the People's Republic of China (MOST), the two major funding bodies for such research.
As Fig. 4 shows, China, the largest country in this region, started its Big Data research on SM-SCM much later: 2014 can be considered the initiating year, when MOST supported two basic projects on Big Data computing techniques over the Internet and their applications. From 2010 to 2013, the Chinese government tried only a few primary projects. However, after the great success of Big Data-driven applications in other regions, China realized that it was time (in 2014) to carry out research and application of Big Data in SM-SCM, since otherwise it would be impossible to catch up with developed countries like the U.S. and U.K. That is why China currently supports rich projects aiming to close the gap. Other countries in this region, such as South Korea, Singapore, Japan, and Australia, are leaders that have launched diverse initiatives on Big Data and implemented numerous projects, profoundly changing the situation in this region (Kim, Trimi, & Chung, 2014).
South Korea launched its Big Data initiative in 2011 through the President's Council on National ICT Strategies, aiming to converge knowledge and administrative analytics in light of advanced Big Data technologies (Kim et al., 2014). This initiative set out several goals: (1) to establish a pan-government Big Data network and analysis system; (2) to establish a public data-diagnosis system; (3) to train and generate talented professionals; (4) to develop Big Data infrastructure technologies; and (5) to develop Big Data management and analytical technologies. To fulfill these goals, many South Korean ministries and agencies have carried out related actions. In healthcare services, for example, the Ministry of Health and Welfare proposed the Social Welfare Integrated Management Network to analyze 385 different categories of public data from 35 agencies, providing welfare benefits and services to deserving recipients. The Ministry of Public Administration and Security planned a disaster-prevention system to predict accidents automatically and in real time from past damage records. These projects significantly improved service conditions and established a lead role in building Big Data infrastructure in this region.
The Singapore government has recently made great efforts to promote Big Data. As early as 2004, it launched the Risk Assessment and Horizon Scanning (RAHS) program to address national security, infectious diseases, and other national concerns by analyzing large-scale datasets (Habegger, 2010). Following successful implementation, this program proactively manages national threats including financial crises, infectious diseases, terrorist attacks, etc. Then, in 2007, an RAHS experimentation center was established to design and develop a Big Data infrastructure and new technological tools to support policy making and service decisions. Additionally, the government has supported plenty of Big Data projects in the service industry in terms of data collection techniques, analytics, and visualization tools, supporting Singapore in the areas of data integrity, service, and excellence (Kankanhalli, Hahn, Tan, & Gao, 2016). The service industry has now become a major focus for the Singapore government, which aims to create value through Big Data research, analysis, and applications; to this end, a portal site (http://data.gov.sg/) was built for accessing publicly available data from more than 5000 datasets across 50 ministries and agencies. In Japan, the government, in association with universities and research institutes, has initiated several programs since 2005 to make full use of the accumulated large-scale data. These programs focus on a new IT infrastructure for the information-explosion era. Currently, the government sets its top priority on handling the data arising from the Fukushima earthquake, tsunami, and nuclear power plant disaster, as well as the related social and economic consequences, using Big Data techniques and cooperating with the country's national science funding bodies to strengthen research and leverage these technologies for preventing, mitigating, forecasting, and managing natural disasters (Miura, Komori, Matsumura, & Maeda, 2015). As a crucial mission for Japan in 2020, ‘‘Big Data Applications" has been designated by branches of the Ministry of Internal Affairs and Communications, including the Council of Information and Communications and the ICT Strategy Committee, aiming to work out suitable technical solutions for making a better and more intelligent Japan in service and industry.
In 2012, the Australian government supported a United Nations initiative that harnessed data from social networks, search queries, blogs, SMS messages, and the Internet to assess population well-being and improve the services provided by government departments. To further improve public services, the Australian Government Information Management Office (AGIMO) provides an open website, http://data.gov.au/, for accessing government data through the Government 2.0 program. The site supports repository and search tools for government Big Data, saving time and resources through automated tools that let the public search, analyze, and leverage enormous data sets across different areas. In June 2013, AGIMO, within the Department of Finance and Deregulation, issued a report named The Australian Public Service Big Data Strategy, which set out step-by-step implementation plans for service and business opportunities and for improved understanding through enhanced Big Data analytics capacity (AustralianGovernment, 2013). The strategy indicates that a large number of projects will be funded by the government, calling upon industry, research, and academic experts to profoundly advance Big Data technology in the coming 5 years.
4.4. Discussions
From the above statistics and analysis, it can be observed that North America and Europe are the leading regions, which
Fig. 3. Awarded projects in European countries.
share similar goals: to improve national security, to improve public services, and to improve society by developing core and general technologies and their applications. The main concerns of Big Data in SM-SCM are interoperability, analytics capability, visualization, and security. However,
in the Asia Pacific region, developing countries like China and India lack competitiveness compared with developed countries such as Japan, South Korea, and Australia, whose research and applications of Big Data in turn lag behind the U.S. and U.K. Different countries have set their own priorities for using Big Data according to their governments' pressures and issues: for example, terrorism and healthcare are the focus in the U.S.; general core technologies, architectures, and logistics applications in the U.K.; national disasters in Japan; and national defense and best-in-class services in Singapore. China and India are good followers of Big Data but are still in their infancy, carrying out only basic technology work at this stage and largely following current trends in the U.S.
It is expected that the leading countries on Big Data will transfer their technologies and applicable insights to those facing challenges in dealing with large data sets in the coming 3–5 years. The most promising Big Data market will be in the Asia Pacific area, where India may serve as a bridge for packaging technologies and solutions thanks to its solid software and hardware base, low-cost workforce, and education and language advantages. These technologies and solutions are mainly designed and developed to improve public services and address each government's key concerns. Global collaboration will certainly be enhanced, since the data are global in nature and can be used to solve global challenges such as the Global Earth Observation System. Therefore, big companies with distinctive Big Data techniques such as IBM, Google, and Facebook will take action to make full use of their captured data to identify potential markets and propose associated service products able to provide a more intelligent, convenient, and comfortable living planet for human beings.
5. Current challenges, opportunities and future perspectives
Doug Cutting, the founder of Hadoop, predicted that in the future we will be able to store and process ever more data, and the enterprises that do best will be those that best leverage Big Data. More things will get integrated, and the future looks very much like a framework able to store, process, and visualize data and to support intelligent, advanced, and timely decision-making (Koetsier, 2014). It is imagined that future Big Data analysts will know everything about what you did today and what you will do tomorrow. Big Data, one day without doubt, will help improve healthcare, facilitate crime detection, design better goods, optimize traffic patterns, and ultimately build a better planet. Despite this excellent prognosis, there is still a long way to go due to the challenges currently faced. The challenges, opportunities, and future perspectives in SM-SCM Big Data have been classified by the authors in terms of (i) data collection methods, (ii) data transmission, (iii) data storage, (iv) processing technologies for Big Data, (v) Big Data-enabled decision-making models, and (vi) Big Data interpretation and applications.
5.1. Data collection methods
Data collection plays an essential role: without an efficient and effective approach for capturing data, data-based analytics and decision-making are impossible. Though there are many data collection methods, such as Auto-ID technologies, smart sensors, digital devices, and Internet-based social systems, several challenges exist in the service and manufacturing industries. Firstly, paper- and manual-based data collection approaches are still widely used in SM-SCM, especially in developing regions such as the Pearl River Delta (PRD) in China. The data captured through these approaches are prone to be incomplete, inaccurate, and untimely; thus, decisions based on such data are usually unreasonable, unexecutable, and unpresentable (Zhong, Dai, Qu, Hu, & Huang, 2013). Secondly, diverse collectors such as sensors and digital devices have their
Year   Awards   Awarded Amount (USD)   Funded Body
2010   1        61,370                 NSFC
2011   1        19,380                 NSFC
2012   6        1,118,361              NSFC
2013   10       646,792                NSFC
2014   146      23,260,505             NSFC
2014   2        3,835,354              MOST
2015   180      211,199,505            NSFC
Total  346      240,141,267

Fig. 4. Statistics of awards on Big Data in China.
own specific data formats, which are commonly heterogeneous, unstructured, and incompatible, so data integration becomes extremely difficult in some situations. For example, in commercial services, when two similar companies try to merge their transactions, doing so is challenging due to unstandardized data. Thirdly, when very large amounts of data need to be collected simultaneously, the system may be overwhelmed by signal collisions and the limited capacity of a central processor. Consider the online submission of income tax forms in the U.S.: scalability and performance become major concerns when 100 million people, each submitting on the order of 100 KB, may carry out the procedure within a short period so as to meet the deadline. These concerns are equally significant in the finance service and national security logistics industries.
In order to address the above challenges, several opportunities can be observed. With the development of data collection technologies such as smart Auto-ID and Internet of Things (IoT)-enabled devices, more advanced data collectors will be produced so that companies are able to capture and collect their frontline data. These data collectors may integrate biological recognition technologies to differentiate various users, voice-controlled systems to facilitate data collection operations, as well as machine learning and adaptive mechanisms to make the devices smarter and easier to use under different conditions. For the service and logistics industries, mobile smart data collectors are more suitable; thus, IoT technologies may be embedded into cell phones or other physical objects with data collection functionality. For example, a box equipped with a temperature sensor and a radio frequency identification (RFID) tag could be placed into a smart container with an active RFID reader able to identify each box; the container could then be carried by a smart vehicle fitted with IoT-enabled devices and a global positioning system (GPS). Wearable devices with intelligence may be used in SM-SCM for various data collection tasks in the near future, given the fast development of cutting-edge techniques (Wang, 2015).
Data standardization is crucial; thus, standards for different industries will be developed. That opens up an opportunity for standards bodies, database vendors, and operating system vendors to seize the chance to build various sets of standard data models, enabling easier and faster information sharing. Service industries such as banking and insurance, as well as the pharmaceutical and automotive industries, are stringent areas demanding more effective standard models/schemes for gathering customer information. Parallel collection models using these data standards will be a hot topic for handling enormous data collection instantly. With the assistance of advanced hardware design and software algorithms, parallel data collection approaches may handle a terabyte (TB) of data in a second. That brings chances for IT companies to work out new parallel mechanisms or hardware devices for fast and reliable data capture in the near future.
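The parallel collection idea above can be sketched in a few lines. The following is a minimal illustration, not a system from the literature: several worker threads normalize heterogeneous raw events into a single standardized record schema. All names (SensorRecord, collect) and the schema fields are assumptions for this sketch.

```python
# Sketch: parallel normalization of heterogeneous collector events into one
# standardized record schema. Names and schema are illustrative assumptions.
import json
import queue
import threading
from dataclasses import dataclass, asdict

@dataclass
class SensorRecord:
    """A standardized record so heterogeneous collectors share one schema."""
    source: str        # e.g. "rfid-reader-01" or "temp-sensor-07"
    kind: str          # e.g. "rfid", "temperature"
    value: str         # payload, normalized to a string
    timestamp: float   # seconds since epoch

def collect(raw_events, num_workers=4):
    """Normalize raw (source, kind, value, ts) events in parallel."""
    in_q, out, lock = queue.Queue(), [], threading.Lock()

    def worker():
        while True:
            item = in_q.get()
            if item is None:          # sentinel: no more work
                return
            source, kind, value, ts = item
            rec = SensorRecord(source, kind, str(value), float(ts))
            with lock:
                out.append(json.dumps(asdict(rec), sort_keys=True))
            in_q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for ev in raw_events:
        in_q.put(ev)
    in_q.join()                       # wait until every event is normalized
    for _ in threads:
        in_q.put(None)
    for t in threads:
        t.join()
    return out
```

In a real deployment the queue would be fed by RFID readers and sensors rather than an in-memory list, but the divide of work across concurrent consumers is the same.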
Future perspectives of the data collection methods in SM-SCM will focus on smart devices such as:
Multi-functional ubiquitous devices: These devices combine ubiquitous technologies such as cloud computing, smart Auto-ID technologies, and smart sensors for temperature, humidity, and light to achieve multiple functions. They are suitable for collecting data in special services, for example, the delivery of valuable objects which are temperature- and light-sensitive.
Smart robotic collectors: These collectors are designed for capturing data in extreme conditions such as very low or high temperatures and harmful environments with nuclear radiation. Such smart robotic collectors use advanced technologies that enable them to perform like a human, capturing data and sensing everything through listening, smelling, watching, tasting, and touching.
Intelligent adaptive devices: Such devices are designed as the most intelligent data collectors, using machine learning and artificial intelligence technologies. They are wearable and can learn to build up a knowledge base, which guides them in new cases for collecting the required data. For example, in public transportation, these devices could assist disabled people with services such as guiding the route or carrying them up or down stairs where there are no lifts, while collecting data such as information about the people, sensed from their smart ID cards, and the time and destination of their journeys.
5.2. Data transmission
Data transmission in SM-SCM usually includes wired and wireless methods, both of which suffer from reliability issues, especially wireless networks when transferring large amounts of data. Signals such as electrical voltages, radio waves, microwaves, and infrared are easily affected by disturbances like electromagnetic interference, metal reflection, and signal superposition in the communication channels. Under immense data transmission, minor disturbances may bring a "Butterfly effect": a tiny change in a signal can result in large differences in a later state. Additionally, security is a primary concern in most service industries such as banking, insurance, and healthcare. Though data encryption algorithms, models, and mechanisms are used to keep data confidential, attackers can destroy, expose, modify, disable, or steal data and obtain unauthorized access by exploiting vulnerabilities. For example, hackers can steal a person's credit card data by duplicating it onto a forged card. Another transmission issue is velocity, as most people have experienced waiting for a web page to load or data to upload. Transmission speed heavily relies on the bandwidth of a channel and the protocol that defines the data structure over the channel. Current networks (e.g. the Internet, 4G) cannot offer Big Data transmission services efficiently and effectively; thus, a next generation of high-speed networks is necessary.
There are a number of possible opportunities to address the above challenges in Big Data transmission. A well-recognized system is the Global Environment for Network Innovations (GENI), initiated by the U.S. NSF, aiming to enhance computer networking and distributed systems by accelerating data transfer and improving reliability. GENI creates a new network architecture that enables different groups to share available resources such as physical link bandwidth, processing resources, and other assets. Another approach is a smart network with smart devices built on a smart grid architecture, which improves capacity and flexibility. Advanced technologies are utilized by the smart network for sensing and controlling data transmission, with more sophisticated encryption strategies based on biological mechanisms such as DNA, which is unique and difficult to decrypt without an authorized key. Under such a smart network, new protocols will be adopted so that transferred data can be optimally packaged and high-speed data transmission and exchange achieved. To this end, necessary technologies will focus on cloud-based mobiles, 5Cs devices (clever, connected, converged, cooler and cheaper), smart node networks with powerful processing ability, and IoT techniques.
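The protocol support mentioned above, packaging data for reliable transfer, can be illustrated with a toy example: data is split into numbered packets, each carrying a checksum, and the receiver verifies integrity before reassembly. The packet layout here is an assumption made for this sketch, not a real protocol.

```python
# Sketch: packetizing data with per-packet CRC32 checksums and a
# receive-side integrity check. The 8-byte header layout is an assumption.
import struct
import zlib

HEADER = struct.Struct(">II")  # (sequence number, CRC32 of payload)

def packetize(data: bytes, size: int = 1024):
    """Split data into numbered packets, each carrying a CRC32 checksum."""
    packets = []
    for seq, off in enumerate(range(0, len(data), size)):
        chunk = data[off:off + size]
        packets.append(HEADER.pack(seq, zlib.crc32(chunk)) + chunk)
    return packets

def reassemble(packets):
    """Verify every checksum and restore the original byte stream."""
    chunks = {}
    for pkt in packets:
        seq, crc = HEADER.unpack(pkt[:HEADER.size])
        chunk = pkt[HEADER.size:]
        if zlib.crc32(chunk) != crc:
            raise ValueError(f"packet {seq} corrupted in transit")
        chunks[seq] = chunk
    return b"".join(chunks[s] for s in sorted(chunks))
```

A production protocol would add retransmission, encryption, and congestion control on top of this integrity layer, but the packaging principle is the same.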
The future of Big Data transmission in SM-SCM considered by this paper is as follows:
Smart powerful network: This network mainly uses wireless communication standards based on powerful smart devices like switches, hubs, and routers, which are able to process data transfer with high reliability, as each smart node can "talk" to the others and work cooperatively. Meanwhile, smart caches ensure high quality for the large number of data-sharing sensors over the network.
Advanced encryption methods: Such methods use complex characteristics such as biological authentication and chaotic encryption schemes to enhance the security of data over the smart network. Smart nodes with powerful processing abilities can execute these approaches in distributed mode so that data passes safely across the network. Another encryption approach is an intelligent secure processing chip with a key-stream cache, which could be embedded into a ring or necklace. This chip is an active smart device able to recognize its owner by sensing biological features.
Synchronization of central and distributed network architecture: Such an architecture is a bionic system performing on a human neural principle: a supercomputer acts as the brain and each node works as a neural cell. The "brain" centrally controls the behaviors and operations of each node in the whole network, while the distributed nodes interact with and sense each other, reflecting current situations to the "brain" in real time.
5.3. Data storage
A major issue of Big Data storage is the limitation of media, the hardware that physically carries the large amount of data. Current tools are not capable of completing storage operations in seconds when facing such a myriad of data. How to store very large quantities of unstructured or heterogeneous data from various data collectors in service, manufacturing, logistics, and other fields is genuinely challenging. Another issue is the data sets holding super huge volumes, which are difficult to query, insert, update, and delete because such operations are time-consuming when scanning the whole data set. Efficiently keeping the captured data in a set is difficult with current database or data warehouse technologies.
Cloud-based storage and smart storage mechanisms may be two possible solutions to the above challenges. Cloud-based storage is able to provide virtually unlimited storage and flexible access to the data so as to carry out various applications and services via the Internet (Sawhney, Puri, & Van Rietschote, 2013). Within this storage, cloud technology can use implicit security in online mode, identity-based encryption (IBE) for authentication, and a web service API to customize usage (Spoorthy, Mamatha, & Kumar, 2014). In the smart storage approach from IBM, the mechanism is based on an infrastructure with instrumented, interconnected, and intelligent storage devices (Hill, 2012). The smart storage mechanism rests on three basic tenets: efficient by design, self-optimizing, and cloud agile, with the advantages of being data-centric, converging storage and data management, and being oriented toward a storage information infrastructure (Gim, Hwang, Won, & Kant, 2015). Such storage includes self-configuration, self-protection, self-optimization, self-healing, and self-management software intelligence so that huge data sets can be handled efficiently.
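The put/get style of access that cloud storage services expose can be sketched with a toy content-addressable store: each object is stored under a hash of its content, so the key doubles as an integrity check and duplicate content is stored only once. The class and method names are assumptions for illustration, not a real cloud API.

```python
# Sketch: a toy content-addressable object store illustrating a cloud
# storage put/get interface. Names are illustrative assumptions.
import hashlib

class ObjectStore:
    """Stores objects under the SHA-256 hash of their content."""

    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        """Store data and return its content-derived key (deduplicating)."""
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data
        return key

    def get(self, key: str) -> bytes:
        """Retrieve data; the key doubles as a checksum of the content."""
        data = self._blobs[key]
        assert hashlib.sha256(data).hexdigest() == key
        return data
```

Real cloud stores back this interface with replicated, distributed media, but the content-addressing idea carries over directly.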
Future data storage will develop along two dimensions, media and mechanism:
Biological storage media: In the future, magnetic and flash memory materials will be replaced by biological media able to keep data in dynamic, episodic, and semantic modes. This media handles incremental data storage through the proliferation of smart cells, the key components of an intelligent infrastructure, so that unlimited capacity and fast input and output can be achieved.
Learnable storage mechanism: Based on the biological storage media, a learnable storage mechanism is able to simulate the human ability to learn facts and relationships. Two main learning categories are included. The first is instrumental learning, based on the storage behaviors of various smart cells. The other is motor learning, which refines patterns according to the operations of smart-cell practice.
5.4. Processing technologies for Big Data
It is estimated that roughly 635 years would be needed to process 1 K petabytes, assuming a processor expends 100 instructions on one block at 5 GHz (Kaisler et al., 2013). Currently, processing Big Data, for example with a query, is very time consuming, since the speed of traversing all the data in a set such as a data warehouse depends on the index-searching mechanism, and the indices are not sufficient for more complicated and larger data volumes (Ji, Li, Qiu, Awada et al., 2012; Ji, Li, Qiu, Jin et al., 2012). The velocity of computation and analysis on Big Data is a key issue which cannot be solved merely by depending on more powerful supercomputers or by adding computational resources. Traditional serial algorithms, models, and mechanisms are inefficient for Big Data; thus, new data parallelism and advanced approaches are desirable in data processing tasks such as cleansing, compressing, and classification. Analysis heavily depends on the effectiveness and accuracy of data processing. As data volume and complexity increase, large-scale analysis is prone to low accuracy, because computational cost (time) and precision always conflict. To obtain analysis results in time, accuracy is often sacrificed, and the decisions based on the analysis may therefore be inaccurate.
To enhance Big Data processing ability, opportunities lie in parallel processing technologies, cloud-based solutions, and advanced models/algorithms. In parallel processing, a large computation is divided into several smaller ones that can be executed simultaneously; the divided calculations run on a smart grid based on a hierarchical network in which each node is equipped with a high-performance unit for executing task parallelism. In cloud-based solutions, data processing models and algorithms are designed and developed as services kept in a cloud. For different applications, users are able to access suitable services; for example, data compression services can deal with different types of data to reduce the data volume by fully using parallel processing or cloud-computing technologies. For advanced models/algorithms, concurrent programming languages (CPL), automatic parallelization, and improved intelligent methods will play important roles in working out innovative approaches that improve processing speed, accuracy, and adaptability.
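The divide-process-merge pattern described above can be sketched concretely. The following minimal example, an assumption-laden illustration rather than a system from the paper, splits a large record stream into chunks, counts event types in each chunk concurrently, and merges the partial results:

```python
# Sketch: the data-parallel divide/map/reduce pattern. The counting task
# and all names are illustrative assumptions.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_events(records, chunk_size=1000, workers=4):
    """Count event types across a large record stream in parallel."""
    # Divide: split the record stream into fixed-size chunks.
    chunks = [records[i:i + chunk_size]
              for i in range(0, len(records), chunk_size)]
    # Map: build one partial Counter per chunk, concurrently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(Counter, chunks))
    # Reduce: merge the partial counts into a single result.
    total = Counter()
    for partial in partials:
        total += partial
    return total
```

At scale, the thread pool would be replaced by processes or a cluster framework, but the same split-map-merge structure underlies those systems as well.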
The future of Big Data processing technology will concentrate on the following aspects:
Smart cloud-based infrastructure: This is the foundation for organizing all processing technologies and services efficiently and effectively. It will be installed in a cloud, simulating human system characteristics so that different functionalities are optimally designed as smart services for their purposes. Within the infrastructure, different smart services can collaborate closely with each other in a parallel and simultaneous mode.
Intelligent processor: In a smart cloud-based infrastructure, each node is equipped with an intelligent processor which is reconfigurable so as to achieve different processing modes for different processing requirements. The processor contains several powerful biological chips which can perform calculations individually and collaboratively. Thus, processing speed can be accelerated while some intelligent processors examine the processing results to improve accuracy.
5.5. Big Data-enabled decision-making models
More and more decision-making in SM-SCM requires associated information and knowledge that can be excavated from Big Data. Unfortunately, in a recent survey, 55% of respondents stated that they feel Big Data decision models are not viewed strategically at senior levels of their organizations (Jin et al., 2015). Many traditional data mining approaches are repackaged or upgraded and termed Big Data-based decision-making models; in these cases, the approaches cannot deal with the Big Data challenges of specific applications such as SM-SCM. Several challenges should be addressed. First of all, decision-making models may need data for working out resolutions for various purposes, such as optimization at the operational, strategic, and analytical levels; however, facing large volumes of data, decision models require long computation times. Secondly, data-driven decision models lack evaluation criteria for examining their efficiency and effectiveness, and current comparisons with other models or solutions may not be suitable in Big Data-driven cases. Thirdly, a decision-making model always focuses on a specific problem; generic models for solving multi-objective problems are scarcely reported. With the help of Big Data, multi-functional models could be achieved.
New models and decision-making mechanisms are possible opportunities to address these challenges. These new models, on one hand, can make full use of Big Data sets by mining various information and knowledge that can be converted into parameters for optimal decision-making; on the other hand, they can work quickly to obtain optimal results. Such models could be used not only for specific problems but also for multi-objective problems, using different Big Data-driven mechanisms. With historic data built into evaluation criteria, models can then be quantitatively verified.
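A model that converts incoming data into updated parameters can be sketched in its simplest form: an online learner that adjusts itself with every new observation via stochastic gradient descent, so the full data set never needs to be held in memory. The one-feature linear form and the learning rate are assumptions made for this illustration.

```python
# Sketch: a "self-updating" decision model trained one observation at a
# time by stochastic gradient descent. Form and parameters are assumptions.
class OnlineLinearModel:
    """A linear decision model that learns continuously from a data stream."""

    def __init__(self, lr=0.05):
        self.w = 0.0   # weight
        self.b = 0.0   # bias
        self.lr = lr   # learning rate

    def predict(self, x):
        return self.w * x + self.b

    def learn(self, x, y):
        """One stochastic-gradient step on the squared prediction error."""
        err = self.predict(x) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err
```

Fed a stream of (x, y) pairs drawn from y = 2x + 1, the model converges toward w = 2 and b = 1, illustrating how data-driven parameters can be refined continuously rather than refit from scratch.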
Future Big Data-driven decision-making models will develop in two directions:
Self-learning models: In the future, decision-making models will be capable of learning from massive Big Data to improve themselves. Deep machine learning (DML) will be embedded into decision-making models so that they are equipped with continuous learning ability. Such ability can extract invaluable knowledge from different types of Big Data so that these models can be extended and specialized into new models.
Smart decision-maker: Future decision-making models will not work independently. A decision-making model will be able to invite associated models to work collaboratively in a hierarchical or parallel fashion. With intelligent learning ability, a set of decision-making models can form a smart decision-maker that (1) picks up precise data as parameters; (2) figures out resolutions quickly; and (3) evaluates the results sufficiently.
5.6. Big Data interpretation and applications
"Big Data" is really a global problem if we cannot interpret or use it. Many companies boast that they are collecting large amounts of data from a variety of sources, with volumes ever increasing; yet when trying to achieve data-driven decision-making, they follow that data blindly without interpretation approaches. Data visualization is an effective cure for this blindness: it converts raw data into meaningful information through visual representations such as information graphics, scientific visualization, and statistical graphics. Currently, Big Data visualization is lagging due to the weakness of corresponding data analytical methods and display mechanisms. In February 2014, a case at the University of Waterloo Stratford Campus Inspiration Day demonstrated how visualization techniques increase the understanding of Big Data sets in order to communicate a story (Mason, 2014). That has strongly motivated academia and industry to work on Big Data visualization, which plays a critical role in helping decision-makers facilitate their operations and procedures. A good visualization should address the question of which data, carrying core values, should be picked from the Big Data sets. This question is a major challenge due to diversified data formats, complex data relationships, and complicated data computations.
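The selection step above, keeping only the data that carries core value before rendering it, can be illustrated with a small sketch: retain the top-k items by an importance score and draw a simple scaled text chart. The scoring choice (raw magnitude) and all names are assumptions for this example.

```python
# Sketch: pick the top-k "core value" items from a metric set and render
# them as a scaled text bar chart. Scoring and names are assumptions.
import heapq

def top_k_bars(metrics, k=3, width=20):
    """Keep the k largest (label, value) pairs and draw scaled text bars."""
    top = heapq.nlargest(k, metrics, key=lambda kv: kv[1])
    peak = top[0][1] if top else 1
    lines = []
    for label, value in top:
        bar = "#" * max(1, round(width * value / peak))
        lines.append(f"{label:<12} {bar} {value}")
    return lines
```

A real dashboard would score items by business relevance rather than raw magnitude and render graphics rather than text, but the filter-then-display pipeline is the same.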
Big Data is highly topical at the moment, and everyone is talking about the term, but actions and cases are scarce. Some typical applications can be observed where Big Data makes a real difference (Hazen, Boone, Ezell, & Jones-Farmer, 2014). In online shopping over the Internet, it is used for better understanding the behaviors and preferences of customers, so that predictive models can identify the potential market and profit margin when companies expand their Big Data sets with social media data, browser logs, and historic behavioral data. Financial areas with high-frequency trading have real Big Data with many uses today: decision models based on information and trends from past Big Data sets are worked out to carry out bidding, trading, investing, and other associated decisions. Advanced science- and technology-based services such as aviation, scientific research, and geographical information systems mostly use Big Data to improve performance and make advanced decisions. Ten categorized Big Data application areas are discussed by Bernard Marr, who regards them as the most widely applied fields (Marr, 2013).
New opportunities for Big Data interpretation and application can be pursued from several viewpoints. First of all, to ease Big Data visualization, cloud-based components could be designed and developed. These components are based on a drag-and-drop mode for displaying various standardized and formatted data APIs through charts, figures, and curves. Moreover, data/information evaluation mechanisms or systems should be established to ensure the core value of the Big Data sets. These could be based on smart hardware and software technologies, which use intelligent algorithms and models, advanced sensors, and smart control mechanisms so that different cases can use them through cloud-based applications. The evaluation systems have the ability to self-learn from different cases and can work out a possible resolution when facing a similar situation. Furthermore, Big Data applications will be extended into more and more areas closely related to our daily life. With the wide use of cell phones, Big Data apps on smartphones will be exploited as powerful biological devices become smaller and smarter. Finally, Big Data market leaders will be those who can leverage their unique and invaluable knowledge, intellectual property, and core technologies to produce various products and services able to better understand/interpret the data, predict outcomes, and continuously learn from the vast collected data. To support them, research institutions, governments, and end-users can get involved, seizing the golden chance to make their contributions.
Looking to the future, multi-dimensional visualization tools are needed to show the real-time status, changes, and trends of different smart things carrying smart identification devices. Immense numbers of smart things create a smart city, within which enormous data is generated and can be fully used to support the running of the city. In a smart city, people's lives will be greatly facilitated: daily life, health, education, and more can be looked after well, given the Big Data from one's daily behavior and expectations. A smart planet with smart service systems, smart logistics and manufacturing supply chain management, a smart network, and smart control principles is the goal for the future.
6. Conclusion
The increasing volume of data from service and manufacturing supply chain management (SM-SCM) is a challenge which requires tools to make full use of the data. Big Data has emerged as a discipline able to provide possible solutions for data analysis, knowledge extraction, and advanced decision-making. Significant applications of Big Data in services currently focus only on finance, information technology, and advanced-technology fields. This paper outlined illustrative Big Data applications in SM-SCM, Big Data technologies, and current movements worldwide. Current challenges, opportunities, and future perspectives in six concerned aspects were highlighted.
Though the ideas discussed in this paper are based on limited knowledge and data statistics, the concentration on SM-SCM can be expanded to many other domains. Academia and industrial practitioners can draw inspiration from it to enrich this area, so that the transformation and upgrading of SM-SCM can be driven by Big Data.
Acknowledgement
This work is supported by National Natural Science Foundation
of China (Grant Nos. 51405307, 61473093, and 61540030) and
Project Funded by China Postdoctoral Science Foundation
(2015M570720).
Appendix A
Tables A1 and A2
Table A1
Big data awards in different areas from NSERC (2010–2014).
Area of application (detailed)   N   N (%)   Amount ($)   Amount (%)   Average award ($)
Advancement of knowledge
Advancement of knowledge 285 6.95% 14,069,899 8.52% 49,368
Agriculture 29 0.71% 1,329,953 0.81% 45,860
Earth sciences 115 2.80% 4,360,925 2.64% 37,921
Engineering 99 2.41% 3,728,512 2.26% 37,662
Information, computer and communication technologies 203 4.95% 9,468,344 5.74% 46,642
Life sciences (including biotechnology) 179 4.36% 7,923,011 4.80% 44,263
Materials sciences 55 1.34% 3,264,165 1.98% 59,348
Mathematical sciences 113 2.76% 2,467,823 1.49% 21,839
Medical and health sciences 94 2.29% 3,279,847 1.99% 34,892
Physical sciences 75 1.83% 6,236,445 3.78% 83,153
Psychology 50 1.22% 1,896,341 1.15% 37,927
Social sciences 5 0.12% 116,000 0.07% 23,200
Space and astronomy 44 1.07% 2,066,987 1.25% 46,977
Subtotal: 1346 32.82% 60,208,252 36.47% 44,731
Agriculture and primary food production
Agriculture and primary food production 24 0.59% 875,172 0.53% 36,466
Animal management (animal diseases, breeding) 19 0.46% 784,675 0.48% 41,299
Animal production and animal primary products 15 0.37% 382,345 0.23% 25,490
Aquaculture 11 0.27% 848,071 0.51% 77,097
Crop management (pest, disease control and breeding) 2 0.05% 50,000 0.03% 25,000
Farming: soil and water resources 3 0.07% 71,100 0.04% 23,700
Plant production and plant primary products 8 0.20% 282,123 0.17% 35,265
Subtotal: 82 2.00% 3,293,486 2.00% 40,164
Commercial services
Commercial services 7 0.17% 164,241 0.10% 23,463
Sanitary engineering 4 0.10% 186,954 0.11% 46,739
Waste, waste management and recycling 10 0.24% 376,014 0.23% 37,601
Subtotal: 21 0.51% 727,209 0.44% 34,629
Construction, urban and rural planning
Construction methods 17 0.41% 744,117 0.45% 43,772
Construction, urban and rural planning 16 0.39% 896,240 0.54% 56,015
Materials performance 24 0.59% 1,016,140 0.62% 42,339
Structural engineering 34 0.83% 1,635,127 0.99% 48,092
Surveying and photogrammetry 14 0.34% 471,519 0.29% 33,680
Subtotal: 105 2.56% 4,763,143 2.89% 45,363
Energy resources
Alternative energy resources 31 0.76% 1,157,622 0.70% 37,343
Electrical energy 21 0.51% 1,403,164 0.85% 66,817
Energy efficiency 38 0.93% 1,719,284 1.04% 45,244
Energy resources (including production, exploration, processing, distribution and use) 57 1.39% 2,537,718 1.54% 44,521
Energy storage and conversion 23 0.56% 1,159,076 0.70% 50,395
Nuclear energy 17 0.41% 935,003 0.57% 55,000
Oil, gas and coal 22 0.54% 1,076,357 0.65% 48,925
Subtotal: 209 5.10% 9,988,224 6.05% 47,791
Environment
Climate and atmosphere 40 0.98% 1,578,583 0.96% 39,465
Conservation and preservation 23 0.56% 1,113,264 0.67% 48,403
Environment 124 3.02% 5,233,573 3.17% 42,206
Environmental impact of economic activities (including agriculture) 20 0.49% 881,458 0.53% 44,073
Inland waters 18 0.44% 795,388 0.48% 44,188
Land, solid earth, seabeds and ocean floors 5 0.12% 395,920 0.24% 79,184
Modelling and mathematical simulation of natural processes 53 1.29% 2,040,593 1.24% 38,502
Oceans, seas and estuaries 44 1.07% 2,002,252 1.21% 45,506
Pollutants and toxic agents (waste, use 902) 6 0.15% 123,382 0.07% 20,564
Wildlife management 14 0.34% 523,383 0.32% 37,385
Subtotal: 347 8.46% 14,687,796 8.90% 42,328
Health, education and social services
Biomedical engineering 126 3.07% 4,821,105 2.92% 38,263
Health, education and social services 19 0.46% 891,425 0.54% 46,917
Human health (including medically-related psychological research) 47 1.15% 2,013,111 1.22% 42,832
Learning and education 9 0.22% 414,863 0.25% 46,096
Subtotal: 201 4.90% 8,140,504 4.93% 40,500
Information and communication services
Communication systems and services (planning, organization, services) 44 1.07% 1,548,694 0.94% 35,198
Communications technologies (satellites, radar, etc.) 75 1.83% 2,957,009 1.79% 39,427
Computer communications 57 1.39% 1,770,998 1.07% 31,070
Computer software 201 4.90% 6,217,937 3.77% 30,935
Information and communication services 50 1.22% 2,241,180 1.36% 44,824
Information systems and technology 264 6.44% 8,790,736 5.33% 33,298
Subtotal: 691 16.85% 23,526,554 14.25% 34,047
Manufacturing processes and products
Agricultural chemicals (fertilizers, herbicides, pesticides) 4 0.10% 153,784 0.09% 38,446
Ceramic, glass and industrial mineral products 1 0.02% 25,000 0.02% 25,000
Communications equipment 42 1.02% 2,349,482 1.42% 55,940
Consumer goods 4 0.10% 239,835 0.15% 59,959
Electrical and electronic machinery and equipment (including computer hardware) 48 1.17% 1,843,354 1.12% 38,403
Fabricated metal products 3 0.07% 138,000 0.08% 46,000
Fibres and textiles 5 0.12% 158,412 0.10% 31,682
Human pharmaceuticals 9 0.22% 239,000 0.14% 26,556
Industrial chemicals (solvents, resins) 2 0.05% 50,000 0.03% 25,000
Instrumentation technology 33 0.80% 1,623,173 0.98% 49,187
Manufacturing processes and products 71 1.73% 3,576,246 2.17% 50,370
Mechanical machinery, heavy equipment (including farm, forestry, and construction equipment) 11 0.27% 613,228 0.37% 55,748
Medical equipment and apparatus 30 0.73% 1,255,606 0.76% 41,854
Other manufactured products and processes 1 0.02% 200,000 0.12% 200,000
Polymers, rubber and plastics 10 0.24% 427,139 0.26% 42,714
Primary metal products (ferrous and non-ferrous) 5 0.12% 286,365 0.17% 57,273
Processed food products and beverages 9 0.22% 418,472 0.25% 46,497
Production and operations management 17 0.41% 453,000 0.27% 26,647
Transport equipment 4 0.10% 185,132 0.11% 46,283
Veterinary pharmaceuticals 2 0.05% 59,940 0.04% 29,970
Wood, wood products and paper 7 0.17% 459,020 0.28% 65,574
Subtotal: 318 7.75% 14,754,188 8.94% 46,397
Natural resources (economic aspects)
Commercial fisheries 5 0.12% 365,900 0.22% 73,180
Forestry (silviculture, forest management) 24 0.59% 1,164,624 0.71% 48,526
Mineral resources (prospecting, exploration, mining, extraction, processing) 45 1.10% 2,889,895 1.75% 64,220
Natural resources (economic aspects) 4 0.10% 128,167 0.08% 32,042
Oceans and inland waters 6 0.15% 143,821 0.09% 23,970
Subtotal: 84 2.05% 4,692,407 2.84% 55,862
Northern development
Construction, transportation and communications 14 0.34% 520,495 0.32% 37,178
Environment 41 1.00% 2,249,732 1.36% 54,872
Northern development 6 0.15% 512,759 0.31% 85,460
Table A2
Recent focus of technology on big data in SM-SCM from Europe.
Project | Focused technology | Aims | Technical features | Project duration | Awarded amount (M€)
ACTIVE – Enabling the Knowledge Powered Enterprise | ACTIVE Technology | Increase the productivity of knowledge workers through tools that leverage hidden factual and procedural knowledge | Open and scalable; integrated and contextualized knowledge workspace | 03/2008–02/2011 | 8.2
ADVANCE – Advanced predictive-analysis-based decision-support engine for logistics | Data-based decision engine | Enable strategic planning coupled with instant decision making to provide vision in a blizzard of data | Open source; networked strategic planning; coupled with instant decision making | 10/2010–09/2013 | 1.98
AXLE – Advanced Analytics for EXtremely Large European Databases | Open-source implementation of PostgreSQL and Orange products | Improve the speed and quality of decision making on real-world data sets | Scalability engineering (auto-partitioning, compression); visual analytics; advanced architectures for hardware and software | 11/2012–10/2015 | 2.95
BIOBANKCLOUD – Scalable, Secure Storage of Biobank Data | A cloud-computing Platform-as-a-Service (PaaS) for biobanking | Have BiobankCloud remove the biobank bottleneck, enabling global leadership for European biobanks, with improved support for preventing diseases, spotting trends, and advancing our understanding of clinical and molecular pathology | Security, storage, data-intensive tools and algorithms; allows biobanks to share data with one another | 12/2012–11/2015 | 2.08
BIG – Big Data Public Private Forum | Build an industrial community around Big Data in Europe | Establish a proper channel to gather information and adequately influence decision makers on Big Data | Five technical groups related to Big Data; five forums specific to SM-SCM | 09/2012–10/2014 | 2.50
BIOPOOL – Services associated to digitalised contents of tissues in biobanks across Europe | Build a system linking pools of digital data managed by biobanks | Enable biobanks to build a network that links collections of histological digital images of biologic material and associated information managed by biobanks | Text- and image-based search queries; region-of-interest extraction; automated pathology information extraction for specific types of cancers | 09/2012–08/2014 | 1.966
COMSODE – Components Supporting the Open Data Exploitation | Create a publication platform called Open Data Node | Advance capabilities in the Open Data re-use field | Powerful scalable framework; new open data applications | 10/2013–09/2015 | 1.469
Cubist – Combining and Uniting Business Intelligence and Semantic Technologies | Combination of semantic technologies, BI and visual analytics | Leverage Business Intelligence to a new level of precise, meaningful and user-friendly analytics of data | Support for unstructured and structured data sources; hybrid BI-enabled triple store | 10/2010–09/2013 | 3.029
Table A1 (continued)
Area of application (detailed) | Number (N) | Number (%) | Amount ($) | Amount (%) | Average award ($)
Subtotal: 61 1.49% 3,282,986 1.99% 53,819
Not available
Not available 473 11.53% 9,670,630 5.86% 20,445
Subtotal: 473 11.53% 9,670,630 5.86% 20,445
Transportation systems and services
Aerospace 58 1.41% 2,491,523 1.51% 42,957
Ground (road and rail) 5 0.12% 112,980 0.07% 22,596
Transportation systems and services 56 1.37% 2,306,869 1.40% 41,194
Water 44 1.07% 2,431,744 1.47% 55,267
Subtotal: 163 3.97% 7,343,116 4.45% 45,050
Total 4101 100.00% 165,078,495 100.00% 40,253
Table A2 (continued)
Project | Focused technology | Aims | Technical features | Project duration | Awarded amount (M€)
DaPaaS – Data Publishing through the Cloud: A Data- and Platform-as-a-Service Approach for Efficient Data Publication and Consumption | Combining Data-as-a-Service (DaaS) and Platform-as-a-Service (PaaS) for open data | Optimize publication of Open Data and development of data applications | An open DaaS and PaaS; unified linked data access; integrated DaaS and PaaS for open data | 11/2013–10/2015 | 1.499
Dicode – Mastering Data-Intensive Collaboration and Decision Making | Exploit a cloud infrastructure to augment collaboration and decision making in data-intensive and cognitively complex settings | Facilitate and augment collaboration and decision making in data-intensive and cognitively complex settings | Scalable data-mining services; collaborative support services; data-driven decision-making support services | 09/2010–08/2013 | 2.599
DOPA – Data Supply Chains for Pools, Services and Analytics in Economics and Finance | Data Supply Chain information services and visualization tools | Enable European economic actors and researchers to participate in the development of Data Supply Chains on a distributed platform | Large-scale and high-quality information sourcing; automated information processing; automated entity linkage | 05/2012–04/2014 | 1.996
EUROFIT – Integration, Homogenisation and Extension of the Scope of Anthropometric Data Stored in Large EU Pools | An online platform and an open framework | Explore the potential contained in 3D scan repositories for use by European industries | Homologous 3D models; stand-alone tools for shape analysis | 06/2012–05/2014 | 1.71
FIRST – Large-scale information extraction and integration infrastructure for supporting financial decision making | An information extraction, information integration and decision-making infrastructure | Provide a large-scale information extraction and integration infrastructure supporting non-ICT-skilled end users in on-demand financial information access and execution of financial market analyses | Information extraction from unreliable semi-structured sources on a massive scale and in near real time; automatic reuse of existing ontologies and large-scale ontology learning; advanced decision models making use of high-level semantic features | 10/2010–09/2013 | 3.025
GAPFILLER – GNSS DAta Pool for PerFormances PredIction and SimuLation of New AppLications for DevelopERs | A sustainable data pool of worldwide GNSS measurements | Fill the gap between big manufacturers and SMEs by providing the researcher and developer community with a unique extensible data pool enabling performance prediction and simulation of new Global Navigation Satellite System (GNSS) based applications and algorithms | A common GNSS database for performance assessment; EGNOS aeronautical integrity concept | 05/2012–04/2014 | 1.126
FUSEPOOL – Fusing and pooling information for product/service development and research | A user-adaptive living knowledge pool | Provide automated transformation of content from web harvesting and participating organizations into structured Linked Open Data format, and automated group-specific optimization of knowledge finding and matching based on transfer learning from individual users | Use-case-specific autocomplete search; graph browsing to explore related and new information; landscape view for visual analysis of large result sets | 07/2012–06/2014 | 1.93
ONTORULE – Ontologies meet business rules | The ONTORULE technology | Lift the knowledge relevant to business rules in an organization from the IT level to the business level, allow management of this knowledge by business professionals, and make this knowledge available to the software applications in the organization | High-quality linked statistical data; advanced data analytics and visualizations | 11/2013–10/2015 | 0.985
OpenCube – Publishing and Enriching Linked Open Statistical Data for the Development of Data Analytics and Enhanced Visualization Services | OpenCube technology | Facilitate (a) publishing of high-quality linked statistical data, and (b) reusing distributed linked statistical datasets to perform advanced data analytics and visualizations | Open-source software tools; distributed linked statistical datasets | 11/2013–10/2015 | 0.985
References
Adshead, A. (2013). Big data storage: Defining big data and the type of storage it needs. TechTarget podcast. <http://www.computerweekly.com/podcast/Big-datastorage-Defining-big-data-and-the-type-of-storage-it-needs>.
Ahuja, S. P., & Moore, B. (2013). State of big data analysis in the cloud. Network and
Communication Technologies, 2(1), 62.
Alfalla-Luque, R., Marin-Garcia, J. A., & Medina-Lopez, C. (2014). An analysis of the
direct and mediated effects of employee commitment and supply chain
integration on organisational performance. International Journal of Production
Economics, 162, 242–257.
Angus, D., Rintel, S., & Wiles, J. (2013). Making sense of big text: a visual-first
approach for analysing text data using Leximancer and Discursis. International
Journal of Social Research Methodology, 16(3), 261–267.
Artigues, A., Cucchietti, F. M., Montes, C. T., Vicente, D., Calmet, H., Marin, G., …
Vazquez, M. (2015). Scientific big data visualization: A coupled tools approach.
Supercomputing Frontiers and Innovations, 1(3), 4–18.
AustralianGovernment (2013). The Australian public service big data strategy
(pp. 1–24).
Bechini, A., Marcelloni, F., & Segatori, A. (2016). A MapReduce solution for
associative classification of big data. Information Sciences, 332, 33–55.
Bell, L. (2014). IDF: Intel announces A-Wear to push big data apps via Internet of Things. The INQUIRER. <http://www.theinquirer.net/inquirer/news/2364331/idfintel-announces-a-wear-to-push-big-data-apps-via-internet-of-things>.
Bhatti, R., LaSalle, R., Bird, R., Grance, T., & Bertino, E. (2012). Emerging trends around big data analytics and security: Panel. In Proceedings of the 17th ACM symposium on access control models and technologies (pp. 67–68).
Biesdorf, S., Court, D., & Willmott, P. (2013). Big data: What's your plan? Insights & Publications. <http://www.mckinsey.com/insights/business_technology/big_data_whats_your_plan>.
BigData-Startups (2013). Rolls Royce shifts in higher gear with big data. BigData-Startups: The Online Big Data Knowledge Platform. <http://www.bigdatastartups.com/BigData-startup/rolls-royce-shifts-higher-gear-big-data/>.
Bloomberg (2013). UPS spends $1 billion a year on big data: For what? Bloomberg TV. <http://www.bloomberg.com/video/how-ups-saves-cash-through-technologybig-data-uKw4qK3sRJW7iJkzH~ZhPw.html>.
Capron, E. (2013). How big data has delivered for FedEx for 25 years. SAP Business Innovation. <http://blogs.sap.com/innovation/big-data/big-data-deliveredfedex-25-years-0887095>.
Castiglione, A., Gribaudo, M., Iacono, M., & Palmieri, F. (2014). Exploiting mean field
analysis to model performances of big data architectures. Future Generation
Computer Systems, 37, 203–211.
Chae, B. K. (2015). Insights from hashtag #supplychain and Twitter analytics: Considering Twitter and Twitter data for supply chain practice and research. International Journal of Production Economics, 165, 247–259.
Chen, H., Chiang, R. H., & Storey, V. C. (2012). Business intelligence and analytics:
from big data to big impact. MIS Quarterly, 36(4), 1166–1189.
Chen, H. C., Compton, S., & Hsiao, O. (2013). Diabeticlink: A health big data system for
patient empowerment and personalized healthcare. Smart health (pp. 71–83).
Berlin Heidelberg: Springer.
Chen, M., Mao, S., Zhang, Y., & Leung, V. C. (2014). Big data storage. Big data: Related
technologies, challenges, future prospects (pp. 33–49). Cham Heidelberg New
York Dordrecht London: Springer.
Choo, J., & Park, H. (2013). Customizing computational methods for visual analytics
with big data. Computer Graphics and Applications, IEEE, 33(4), 22–28.
Craglia, M., de Bie, K., Jackson, D., Pesaresi, M., Remetey-Fülöpp, G., Wang, C., …
Ehlers, M. (2012). Digital Earth 2020: Towards the vision for the next decade.
International Journal of Digital Earth, 5(1), 4–21.
Crochet-Damais, A. (2013). La Poste: pour ne plus perdre de courrier [La Poste: to stop losing mail]. Journal Du Net. <http://www.journaldunet.com/solutions/dsi/projet-de-big-data-en-france/la-poste-pour-ne-plus-perdre-de-courrier.shtml>.
CSCMP (2014). Big data in the supply chain<http://cscmp.org/resources-research/
big-data-supply-chain>.
Currie, J. (2013). "Big Data" versus "Big Brother": On the appropriate use of large-scale data collections in pediatrics. Pediatrics, 131(Supplement 2), S127–S132.
Dekker, R., Pinçe, Ç., Zuidwijk, R., & Jalil, M. N. (2013). On the use of installed base
information for spare parts logistics: a review of ideas and industry practice.
International Journal of Production Economics, 143(2), 536–545.
DeLyser, D., & Sui, D. (2013). Crossing the qualitative-quantitative divide II: Inventive approaches to big data, mobile methods, and rhythmanalysis. Progress in Human Geography, 37(2), 293–305.
Dimakopoulou, A. G., Pramatari, K. C., & Tsekrekos, A. E. (2014). Applying real options to IT investment evaluation: The case of radio frequency identification (RFID) technology in the supply chain. International Journal of Production Economics, 156, 191–207.

Table A2 (continued)
Project | Focused technology | Aims | Technical features | Project duration | Awarded amount (M€)
OpenDataMonitor – Monitoring, Analysis and Visualization of Open Data Catalogues, Hubs and Repositories | Open-source software such as CKAN and its extensions | Gain an overview of available open data resources and undertake analysis and visualization of existing data catalogues using innovative technologies | Scalable analytical and visualization methods; highly extensible framework | 11/2013–10/2015 | 1.496
SEMAGROW – Data-intensive techniques to boost the real-time performance of global agricultural data infrastructures | Data-intensive techniques | Develop the scalable, efficient, and robust data services needed to take full advantage of the data-intensive and interdisciplinary science of 2020, and to reshape the way data analysis techniques are applied to the heterogeneous data cloud | Heterogeneous data collections and streams; reactive data analysis; reactive resource discovery | 11/2012–10/2015 | 2.47
TRIDEC – A Collaborative, Complex and Critical Decision-Support in Evolving Crises | Real-time intelligent information management | Design and implement a robust and scalable service infrastructure supporting the integration and utilization of existing resources with accelerated generation of large volumes of data | Collaborative computing techniques; rapid and on-demand applications and tools | 09/2010–08/2013 | 6.79
VISCERAL – VISual Concept Extraction challenge in RAdioLogy | Information extraction and retrieval involving medical image data and associated text | Define and execute a targeted benchmark framework to speed up progress towards automated anatomy identification and pathology identification in 3D (MRI, CT) and 4D (MRI with a time component) radiology images | A cloud infrastructure; automated benchmark entries | 11/2012–04/2015 | 1.42
ViSTA-TV – Video Stream Analytics for Viewers in the TV Industry | A high-quality linked open dataset (LOD) describing live TV programming | Gather consumers' anonymized viewing behavior and the actual video streams from broadcasters/IPTV transmitters, and combine them with enhanced electronic program guide information as the input for a holistic live-stream data mining analysis: the basis for an SME-driven marketplace for TV viewing-behavior information | Data mining for tagging, recommendations, and behavioral analyses; temporal/probabilistic RDF-triple stream processing | 06/2012–05/2014 | 1.995
Dittrich, J., & Quiané-Ruiz, J.-A. (2012). Efficient big data processing in Hadoop
MapReduce. Proceedings of the VLDB Endowment, 5(12), 2014–2015.
Drummonds, S. (2013). Big data in Asia Pacific. Informationincognita. <http://www.infoincog.com/big-data-in-asia-pacific/>.
Dutta, D., & Bose, I. (2015). Managing a big data project: The case of Ramco cements
limited. International Journal of Production Economics, 165, 293–306.
Eatontown, N. J. (2012). Altior's AltraSTAR – Hadoop storage accelerator and optimizer now certified on CDH4 (Cloudera's distribution including Apache Hadoop version 4). PRNewswire. <http://www.prnewswire.com/news-releases/altiors-altrastar—hadoop-storage-accelerator-and-optimizer-now-certified-on-cdh4-clouderasdistribution-including-apache-hadoop-version-4-183906141.html>.
EC (2013). Communication on data-driven economy<https://ec.europa.eu/digitalagenda/en/news/communication-data-driven-economy>.
Eichengreen, B., & Gupta, P. (2013). The two waves of service-sector growth. Oxford
Economic Papers, 65(1), 96–123.
Fang, B., & Zhang, P. (2016). Big data in finance. Big data concepts, theories, and
applications (pp. 391–412). Switzerland: Springer International Publishing.
Fawcett, S. E., & Waller, M. A. (2013). Considering supply chain management's professional identity: The beautiful discipline (Or, "We Don't Cure Cancer, But We Do Make a Big Difference"). Journal of Business Logistics, 34(3), 183–188.
Feldman, B., Martin, E. M. & Skotnes, T. (2012). Big data in Healthcare Hype and
Hope. Dr. Bonnie 360-Business Development for Digital Health, 1–56.
Feldman, D., Schmidt, M., & Sohler, C. (2013). Turning big data into tiny data:
Constant-size coresets for k-means, pca and projective clustering. In Proceedings
of the twenty-fourth annual ACM-SIAM symposium on discrete algorithms
(pp. 1434–1453). SIAM.
Ferdows, K., & De Meyer, A. (1990). Lasting improvements in manufacturing
performance: In search of a new theory. Journal of Operations Management, 9(2),
168–184.
Fukase, A. (2013). Japan Post prepares for IPO. The Wall Street Journal. <http://online.wsj.com/news/articles/SB10001424052702304709904579406650519315292>.
Garel-Jones, P. (2011). How the financial services sector uses big data analytics to predict client behaviour. IT for Financial Services. <http://www.computerweekly.com/feature/How-the-financial-services-sector-uses-big-dataanalytics-to-predict-client-behaviour>.
GE (2014). The rise of industrial big data. GE Intelligent Platforms. <http://www.ge-ip.com/library/detail/13170>.
Gianchandani, E. (2012). Obama administration unveils $200M big data R&D initiative. The Computing Community Consortium Blog. <http://www.cccblog.org/2012/03/29/obama-administration-unveils-200m-big-data-rd-initiative/>.
GilPress (2012). A very short history of big data. <http://whatsthebigdata.com/2012/06/06/a-very-short-history-of-big-data/>. June 6, 2012.
Gim, J., Hwang, T., Won, Y., & Kant, K. (2015). SmartCon: Smart Context Switching
for Fast Storage IO Devices. ACM Transactions on Storage (TOS), 11(2), 5:1–5:25.
Globe-Newswire (2013). Global big data market in the financial services sector
2012–2016<http://www.cnbc.com/id/101124650>.
Habegger, B. (2010). Strategic foresight in public policy: Reviewing the experiences
of the UK, Singapore, and the Netherlands. Futures, 42(1), 49–58.
Harrison, C. (2012). Deal watch: ‘Big data’ deal for diabetes clinical trial modelling.
Nature Reviews Drug Discovery, 11(11), 822.
Harvey, C. (2012). 50 Top open source tools for big data : Datamation<http://
www.datamation.com/data-center/50-top-open-source-tools-for-big-data-1.
html>.
Hashem, I. A. T., Yaqoob, I., Anuar, N. B., Mokhtar, S., Gani, A., & Khan, S. U. (2015). The rise of "big data" on cloud computing: Review and open research issues. Information Systems, 47, 98–115.
Hazen, B. T., Boone, C. A., Ezell, J. D., & Jones-Farmer, L. A. (2014). Data quality for
data science, predictive analytics, and big data in supply chain management: An
introduction to the problem and suggestions for research and applications.
International Journal of Production Economics, 154, 72–80.
HDFS (2010). Facebook has the world's largest Hadoop cluster! Facebook post. <http://hadoopblog.blogspot.hk/2010/05/facebook-has-worlds-largesthadoop.html>.
Henschen, D. (2014). Merck optimizes manufacturing with big data analytics. InformationWeek. <http://www.informationweek.com/strategic-cio/executive-insights-and-innovation/merck-optimizes-manufacturing-with-big-data-analytics/d/d-id/1127901>.
Hill, D. (2012). IBM smarter storage: What a smart idea. Mesabi Group Commentary,
1–6.
IBM (2013). What is big data? – Bringing big data to the enterprise<http://www.
ibm.com/big-data/us/en/>.
IBM-SAP (2013). China Ocean Shipping (Group) Company surges into new markets
with IBM and SAP<http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?
subtype=AB&infotype=PM&appname=SNDE_SP_SP_CNEN&htmlfid=SPC03438
CNEN&attachment=SPC03438CNEN.PDF>.
IntelCenter (2013). Big data visualization: turning big data into big insights. White
Paper: 1–14.
Ji, C., Li, Y., Qiu, W., Awada, U., & Li, K. (2012). Big data processing in cloud
computing environments. In Proceedings of the 2012 12th international
symposium on pervasive systems, algorithms and networks. 13–15 December
(pp. 17–23). San Marcos, TX: IEEE Computer Society.
Ji, C. Q., Li, Y., Qiu, W. M., Jin, Y. W., Xu, Y. J., Awada, U., … Qu, W. Y. (2012). Big data processing: Big challenges and opportunities. Journal of Interconnection Networks, 13(3–4), 1–19.
Jimenez, D.-Z., Stires, C., Li, Q., Zhang, C., Sehgal, V., & Arora, R. (2013). Asia/Pacific
big data technology and services 2013–2017 analysis and forecast: the journey
to tech + transformation continues. Next Stop Is Innovation. Market Analysis,
1–50.
Jin, X., Zong, S., Li, Y., Wu, S., Yin, W., & Ge, W. (2015). A domain knowledge based
method on active and focused information service for decision support within
big data environment. Procedia Computer Science, 60, 93–102.
Jitkajornwanich, K., Gupta, U., Elmasri, R., Fegaras, L., & McEnery, J. (2013). Using
mapreduce to speed up storm identification from big raw rainfall data. In The
fourth international conference on cloud computing, GRIDs, and virtualization (pp.
49–55).
Joshi, K., & Yesha, Y. (2012). Workshop on analytics for big data generated by
healthcare and personalized medicine domain. In Proceedings of the 2012
conference of the center for advanced studies on collaborative research
(pp. 267–269). IBM Corp.
Kaisler, S., Armour, F., Espinosa, J. A., & Money, W. (2013). Big data: Issues and
challenges moving forward. In 2013 46th Hawaii international conference on
system sciences (HICSS), 7–10 January (pp. 995–1004). Wailea, Maui, HI: IEEE.
Kankanhalli, A., Hahn, J., Tan, S., & Gao, G. (2016). Big data and analytics in
healthcare: Introduction to the special section. Information Systems Frontiers, 18
(2), 233–235.
Kepner, J., Arcand, W., Bergeron, W., Bliss, N., Bond, R., Byun, C., … Kurz, J. (2012).
Dynamic distributed dimensional data model (D4M) database and computation
system. In 2012 IEEE international conference on acoustics, speech and signal
processing (ICASSP), Kyoto, 25–30 March (pp. 5349–5352). IEEE.
Khatri, H. (2013). Trends in manufacturing operations: Leveraging big data across the value chain. Profit: Oracle Technology Powered, Business Driven. <http://www.oracle.com/us/corporate/profit/archives/opinion/011813-hkhatri1899121.html>.
Kim, G.-H., Trimi, S., & Chung, J.-H. (2014). Big-data applications in the government
sector. Communications of the ACM, 57(3), 78–85.
Koetsier, J. (2014). Hadoop founder says future of big data looks like, well, Hadoop. VB Insight. <http://venturebeat.com/2014/04/10/hadoop-founder-says-future-ofbig-data-looks-like-well-hadoop/>.
Lardinois, F. (2014). Google launches BigQuery streaming for real-time, big-data analytics. TechCrunch. <http://techcrunch.com/2014/03/25/google-launchesbigquery-streaming-for-real-time-big-data-analytics/>.
LaValle, S., Lesser, E., Shockley, R., Hopkins, M. S., & Kruschwitz, N. (2013). Big data,
analytics and the path from insights to value. MIT Sloan Management Review, 52
(2), 21–31.
Lee, C.-H., Wang, Y.-H., & Trappey, A. J. (2015). Ontology-based reasoning for the
intelligent handling of customer complaints. Computers & Industrial Engineering,
84, 144–155.
Li, J., Tao, F., Cheng, Y., & Zhao, L. (2015). Big data in product lifecycle management.
The International Journal of Advanced Manufacturing Technology, 81(1–4),
667–684.
Lin, J. (2013). MapReduce is good enough? If all you have is a hammer, throw away everything that's not a nail! Big Data, 1(1), 28–37.
LMG (2014). Big potential in big data. Business Services. <http://www.royalmail.com/marketing-services-regular/marketreach>.
Lohman, T. (2013). Big data and analytics trends for 2014. ZDNet. <http://www.zdnet.com/big-data-and-analytics-trends-for-2014-7000024260/>.
Loshin, D. (2013). Big data analytics: From strategic planning to enterprise integration with tools, techniques, NoSQL, and graph. Elsevier.
Lupton, D. (2013). The commodification of patient opinion: The digital patient
experience economy in the age of big data. Sydney Health & Society Group
Working Paper, (3), 1–18.
Lurie, A. (2014). 39 data visualization tools for big data. Cloud Computing. <http://blog.profitbricks.com/39-data-visualization-tools-for-big-data/>.
Madhavan, J., Balakrishnan, S., Brisbin, K., Gonzalez, H., Gupta, N., Halevy, A. Y., …
Lee, H. (2012). Big data storytelling through interactive maps. IEEE Data
Engineering Bulletin, 35(2), 46–54.
Markopoulos, J. (2012). 5 ways the industrial internet will change manufacturing. Forbes. <http://www.forbes.com/sites/ciocentral/2012/11/29/5-ways-theindustrial-internet-will-change-manufacturing/>.
Marr, B. (2013). The awesome ways big data is used today to change our
world<https://www.linkedin.com/today/post/article/20131113065157-
64875646-the-awesome-ways-big-data-is-used-today-to-change-our-world>.
Martin, J., Moritz, G., & Frank, W. (2013). Big data in logistics a DHL perspective on
how to move beyond the hype. DHL Customer Solutions & Innovation, 1–30.
Mason, H. (2014). Inspiration day at the University of Waterloo Stratford Campus. <http://www.betakit.com/event/inspiration-day-at-university-ofwaterloo-stratford-campus/>.
McKelvey, K., Rudnick, A., Conover, M. D., & Menczer, F. (2012). Visualizing
communication on social media: Making big data accessible. arXiv preprint
arXiv:1202.1367.
MeriTalk (2014). USPS tackles scale and speed in big data challenge. MeriTalk: The Government IT Network. <http://www.meritalk.com/bdx-profile-atkins.php>.
Mesnier, M., Ganger, G. R., & Riedel, E. (2003). Object-based storage.
Communications Magazine, IEEE, 41(8), 84–90.
Mind-Commerce (2014). Big data in manufacturing: market analysis, case studies,
and forecasts 2014–2019. Market Research Reports, 1–52.
Miura, A., Komori, M., Matsumura, N., & Maeda, K. (2015). Expression of negative
emotional responses to the 2011 Great East Japan Earthquake: Analysis of big
data from social media. Shinrigaku kenkyu: The Japanese Journal of Psychology, 86
(2), 102–111.
Myers, M. (2014). New Imperial and KPMG institute will harness the
power of corporate data<http://www3.imperial.ac.uk/newsandeventspggrp/
imperialcollege/newssummary/news_15-7-2014-10-29-9>.
NEC (2012). Case study: Nippon Express Co., Ltd. NEC. <http://www.nec.com/>, pp. 1–3.
Nedelcu, B. (2013). About big data and its challenges and benefits in manufacturing.
Database Systems Journal BOARD, IV(3), 10–19.
NewsOn6.com (2016). <http://www.newson6.com/story/31957754/big-datamarket-report-analysis-trends-size-share-opportunity-assessment-forecast-to2022>.
Noor, A. (2013). Putting big data to work. ASME Mechanical Engineering, 32–37.
O’Connell, M. (2010). Interactive clinical data review for safety assessment and trial
operations management. Phuse, 1–13.
O’Driscoll, A., Daugelaite, J., & Sleator, R. D. (2013). ‘Big data’, Hadoop and cloud
computing in genomics. Journal of Biomedical Informatics, 46(5), 774–781.
O’Connell, M. (2013). Object storage systems: The underpinning of cloud and big
data initiatives. SNIA Education, 1–35.
Peat, M. (2013). Big data in finance. InFinance: The Magazine for Finsia Members, 127
(1), 34.
Phneah, E. (2013). Toyota to roll out big data traffic service in Japan. ZDNet. <http://www.zdnet.com/toyota-to-roll-out-big-data-traffic-service-in-japan7000016079/>.
President’s Council on National ICT Strategies (2011). Establishing a smart
government by using big data. Washington, DC.
Qu, Z. (2012). Semantic processing on big data. Advances in multimedia, software
engineering and computing (Vol. 2). Springer.
Raghupathi, W., & Raghupathi, V. (2014). Big data analytics in healthcare: Promise
and potential. Health Information Science and Systems, 2(1), 1–10.
Raman, A. C. (2014). Storage infrastructure for big data and cloud. Handbook of
research on cloud infrastructures for big data analytics, p. 110.
Ramos, L. (2015). Semantic Web for manufacturing, trends and open issues: Toward
a state of the art. Computers & Industrial Engineering, 90, 444–460.
Rath, J. (2013). USPS leverages big data to fight fraud. Big Data, HPC <http://www.
datacenterknowledge.com/archives/2013/05/01/usps-leverages-big-data-to-fight-fraud/>.
Reijers, H. A. (2003). Design and control of workflow processes: Business process
management for the service industry. Springer-Verlag.
Richtárik, P., & Takácˇ, M. (2015). Parallel coordinate descent methods for big data
optimization. Mathematical Programming, 156(1), 1–52.
Rouse, M. (2012). Big data analytics. Essential guide <http://searchbusinessanalytics.
techtarget.com/definition/big-data-analytics>.
Ryan, A. (2012). Under the hood: Hadoop distributed filesystem reliability with
namenode and avatarnode. Facebook Post <https://www.facebook.com/
notes/facebook-engineering/under-the-hood-hadoop-distributed-filesystem-reliability-with-namenode-and-avata/10150888759153920>.
Salehan, M., & Kim, D. J. (2016). Predicting the performance of online consumer
reviews: A sentiment mining approach to big data analytics. Decision Support
Systems, 81, 30–40.
Sawhney, S., Puri, H., & Van Rietschote, H. (2013). Systems and methods for using
cloud-based storage to optimize data-storage operations. Google Patents.
Schneeweiss, S. (2016). Improving therapeutic effectiveness and safety through big
healthcare data. Clinical Pharmacology & Therapeutics, 99(3), 262–265.
Schultz, T. (2013). Turning healthcare challenges into big data opportunities: A
use-case review across the pharmaceutical development lifecycle. Bulletin of the
American Society for Information Science and Technology, 39(5), 34–40.
Sellars, S., Nguyen, P., Chu, W., Gao, X., Hsu, K. L., & Sorooshian, S. (2013).
Computational earth science: Big data transformed into insight. Eos,
Transactions American Geophysical Union, 94(32), 277–278.
Siemens (2014). Big data pays big in power plant operation<http://www.energy.
siemens.com/hq/en/energy-topics/energy-stories/rds-tahaddart.htm>.
Spoorthy, V., Mamatha, M., & Kumar, B. S. (2014). A survey on data storage and
security in cloud computing. International Journal of Computer Science and
Mobile Computing, 3(6), 306–313.
Spotfire (2013). Big data in manufacturing: Rise of the machine. TIBCO Spotfire's
Trends and Outliers Blog <http://spotfire.tibco.com/blog/?p=20446>.
Stateczny, A., & Wlodarczyk-Sielicka, M. (2014). Self-organizing artificial neural
networks into hydrographic big data reduction process. Rough sets and intelligent
systems paradigms (pp. 335–342). Switzerland: Springer International
Publishing.
Stedman, C. (2014). Enterprises take a long view on big data programs and
purchases. TechTarget Search Business Analytics <http://searchbusinessanalytics.
techtarget.com/news/2240217365/Enterprises-take-a-long-view-on-big-data-programs-and-purchases>.
Stephen, C., Serguei, N., & Arnd, H. (2014). When big data meets manufacturing.
INSEAD, The Business School for the World <http://knowledge.insead.
edu/operations-management/when-big-data-meets-manufacturing-3297>.
Strohbach, M., Daubert, J., Ravkin, H., & Lischka, M. (2016). Big data storage. New
horizons for a data-driven economy (pp. 119–141). Switzerland: Springer
International Publishing.
Swaminathan, S. (2012). The effects of big data on the logistics industry. Profit:
Oracle Technology Powered, Business Driven <http://www.oracle.com/
us/corporate/profit/archives/opinion/021512-sswaminathan-1523937.html>.
Syed, A. R., Gillela, K., & Venugopal, C. (2013). The future revolution on big data.
International Journal of Advanced Research in Computer and Communication
Engineering, 2(6), 2446–2451.
Talia, D. (2013). Toward cloud-based big-data analytics. IEEE Computer Science,
98–101.
Tripathy, B., & Mittal, D. (2016). Hadoop based uncertain possibilistic kernelized
c-means algorithms for image segmentation and a comparative analysis. Applied
Soft Computing, 46, 886–923.
Tsuchiya, S., Sakamoto, Y., Tsuchimoto, Y., & Lee, V. (2012). Big data processing in
cloud environments. FUJITSU Science and Technology Journal, 48(2), 159–168.
Vera-Baquero, A., Colomo-Palacios, R., & Molloy, O. (2013). Business process
analytics using a big data approach. IT Professional, 1–9.
Versace, M., & Karen, M. (2012). The case for big data in the financial services
industry. IDC Financial Insights, 1–13.
Wang, C. H. (2015). Using the theory of inventive problem solving to brainstorm
innovative ideas for assessing varieties of phone-cameras. Computers &
Industrial Engineering, 85, 227–234.
Wang, G., Gunasekaran, A., Ngai, E. W., & Papadopoulos, T. (2016). Big data analytics
in logistics and supply chain management: Certain investigations for research
and applications. International Journal of Production Economics, 176, 98–110.
Wang, C., Rayan, I. A., & Schwan, K. (2012). Faster, larger, easier: reining real-time
big data processing in cloud. Proceedings of the Posters and Demo Track (Vol. 4,
pp. 1–2). ACM.
Weng, W. H., & Weng, W. T. (2013). Forecast of development trends in big data
industry. In Proceedings of the institute of industrial engineers Asian conference
2013 (pp. 1487–1494). Singapore: Springer Science + Business Media.
Wu, K., Bethel, W., Gu, M., Leinweber, D., & Ruebel, O. (2013). A big data approach to
analyzing market volatility. Algorithmic Finance, 2(3–4), 241–267.
Wu, X., Zhu, X., Wu, G. Q., & Ding, W. (2014). Data mining with big data. IEEE
Transactions on Knowledge and Data Engineering, 26(1), 97–107.
Xu, C., Goldstone, R., Liu, Z., Chen, H., Neitzel, B., & Yu, W. (2016). Exploiting
analytics shipping with virtualized MapReduce on HPC backend storage servers.
IEEE Transactions on Parallel and Distributed Systems, 27(1), 185–196.
Xu, J., & Güting, R. H. (2013). A generic data model for moving objects.
Geoinformatica, 17(1), 125–172.
Zakir, J., Seymour, T., & Berg, K. (2015). Big data analytics. Issues in Information
Systems, 16(2), 81–90.
Zhong, R. Y., Dai, Q. Y., Qu, T., Hu, G. J., & Huang, G. Q. (2013). RFID-enabled real-time
manufacturing execution system for mass-customization production. Robotics
and Computer-Integrated Manufacturing, 29(2), 283–292.
Zhong, R. Y., Huang, G. Q., & Dai, Q. Y. (2014). A big data cleansing approach for
n-dimensional RFID-Cuboids. In Proceedings of the 2014 IEEE 18th international
conference on computer supported cooperative work in design (CSCWD 2014), 21–
23 May, Taiwan (pp. 289–294).
Zhong, R. Y., Huang, G. Q., Dai, Q. Y., & Zhang, T. (2013). Mining trajectory
knowledge from RFID-enabled logistics data. In Proceedings of the 43rd international
conference on computers and industrial engineering (CIE43), 16–18 October, Hong
Kong, [34]-1-12.
Zhong, R. Y., Huang, G. Q., Lan, S. L., Dai, Q. Y., Xu, C., & Zhang, T. (2015). A big data
approach for logistics trajectory discovery from RFID-enabled production data.
International Journal of Production Economics, 165, 260–272.
Zhong, R. Y., Lan, S. L., Xu, C., Dai, Q. Y., & Huang, G. Q. (2016). Visualization of
RFID-enabled shopfloor logistics big data in cloud manufacturing. The International
Journal of Advanced Manufacturing Technology, 84(1), 5–16.
Zhong, R. Y., Xu, C., Chen, C., & Huang, G. Q. (2015). Big data analytics for physical
internet-based intelligent manufacturing shop floors. International Journal of
Production Research, 1–12.
Zikopoulos, P., & Eaton, C. (2011). Understanding big data: Analytics for enterprise
class Hadoop and streaming data. McGraw-Hill Osborne Media.