The role of Chief Data Officer (CDO) would seem to be a godsend for the data monetization challenge. The CDO should be the catalyst who helps organizations become more effective at leveraging data and analytics to power digital transformation. However, all is not well in the world of the CDO. Many organizations appoint a CDO with an Information Technology (IT) background – the same background and experience as the Chief Information Officer (CIO). The organization then ends up splitting the existing CIO role between the current CIO and the CDO, giving the CDO the tasks associated with data collection, governance, protection and access. Splitting the existing CIO role isn't sufficient. Instead, the CDO needs a totally different charter than the CIO, and a key aspect of that charter must be data monetization.
Microservices Architecture (MSA) is reshaping the enterprise IT ecosystem. It started as a mechanism for breaking large monolithic applications into a set of independent, functionality-focused applications that can be designed, developed, tested and deployed independently. Early adopters of MSA used this pattern to implement their back-end systems, or business logic. Once those back-end systems were in place, the idea arose of applying the same pattern across the board. This article discusses the solution patterns that can be used in an MSA-driven enterprise. ... On top of the back-end systems sits the integration layer, which interconnects heterogeneous back-end systems. Once these services are integrated, they need to be exposed to internal and external users as managed APIs through an API management layer. Security and analytics apply across all three layers.
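The three layers described above can be sketched in plain code. This is a minimal illustration, not a real deployment: the service names, data, and API key are all hypothetical stand-ins for independently deployed services and a real gateway.

```python
# Minimal sketch of the three MSA layers: back-end services, an
# integration layer, and a managed API layer with a security check.
# All names and the "demo-key" check are hypothetical illustrations.

def inventory_service(sku: str) -> dict:
    """Back-end microservice: owns one narrow business capability."""
    stock = {"sku-1": 12, "sku-2": 0}          # stand-in for a private datastore
    return {"sku": sku, "in_stock": stock.get(sku, 0)}

def pricing_service(sku: str) -> dict:
    """Another independent back-end microservice with its own data."""
    prices = {"sku-1": 9.99, "sku-2": 4.50}
    return {"sku": sku, "price": prices.get(sku)}

def product_integration(sku: str) -> dict:
    """Integration layer: composes heterogeneous back-ends into one view."""
    return {**inventory_service(sku), **pricing_service(sku)}

def managed_api(sku: str, api_key: str) -> dict:
    """API management layer: exposes the integrated service as a managed,
    secured API to internal and external consumers."""
    if api_key != "demo-key":                  # hypothetical key check
        return {"status": 401, "error": "unauthorized"}
    return {"status": 200, "data": product_integration(sku)}

print(managed_api("sku-1", "demo-key"))
```

In a real system each layer would be a separately deployed process (or a product such as an ESB or API gateway); the point here is only the direction of the dependencies: the API layer calls the integration layer, which calls the back-ends, never the reverse.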
Paperwork is an integral part of doing business, but physical paper is not. In fact, reliance on paper documents results in costs that could be eliminated, or at least radically reduced, by going paperless. As physical paperwork piles up, so do issues such as: a) slower time to complete routine tasks that rely on paper as an input; b) increased risk of a security breach through lost or stolen documents; c) potential for data entry errors from manually keying information into systems; and d) costs for office or offsite space to store paper documents. ... Flexibility has become a business imperative with the upsurge of new technologies, BYOD and more employees working remotely. Keeping this in mind, CIOs will focus on compatibility - the ability to scale and transcend devices and platforms (i.e. the open network) for enhanced collaboration. Integration capabilities will be a basic requirement for any technological implementation.
It is true that many of the competitions you see on Kaggle these days contain unstructured data that lends itself to Deep Learning algorithms like CNNs and RNNs. Anthony Goldbloom, the founder and CEO of Kaggle, observed that winning techniques have divided by whether the data was structured or unstructured. Regarding structured data competitions, Anthony says "It used to be random forest that was the big winner, but over the last six months a new algorithm called XGBoost has cropped up, and it's winning practically every competition in the structured data category." More recently, however, Anthony says the structured category has come to be dominated by what he describes as 'hand crafted' solutions heavy on domain knowledge and stochastic hypothesis testing. When the data is unstructured, it's definitely CNNs and RNNs that are carrying the day.
"We have reached the tipping point where adoption of machine learning in the enterprise is poised to accelerate, and will drive improved business operations, better decision making and provide enhanced or entirely new products and services," said Paul Sallomi, vice chairman of Deloitte. ML, a core element of artificial intelligence, will progress "at a phenomenal pace," according to the study. "As impressive as it is today, in 50 years' time the ML abilities of 2018 will be considered baby steps in the history of this technology," the report said. The report highlights areas that Deloitte thinks will unlock more intensive use of ML in the enterprise by making it easier, cheaper and faster. The most important of these is the growth of new semiconductor chips that will increase the use of ML, enabling applications to use less power while becoming more responsive, flexible and capable.
Naturally, the bigger the scale of the enterprise, the harder it’s going to be to keep IRM consistent. Many software packages and internal procedures are easy to maintain when you only have a few dozen people to worry about. The more people you add to a system, the more points of vulnerability you’ll contend with, and the less secure and less consistent your practices will become. If you want your company’s information to be safe, you need to take IRM more seriously. You should consider establishing a partnership with an IRM organization, or relying on products that give you more control over your own internal IRM. Your documents, messages, and files are the lifeblood of your organization, and all it takes is one breach to compromise your work. Don’t let it happen on your watch; invest in the right infrastructure for IRM, and don’t let it become a secondary priority.
"To assess if a candidate can be successful as a data scientist, I'm looking for a few things: baseline knowledge of the fundamentals, a capacity to think creatively and scientifically about real-world problems, exceptional communication about highly technical topics, and constant curiosity," said Kevin Safford, senior director of engineering at Umbel. Demonstrating that you have a strong understanding of the business at hand and how data can be used to reach business goals will also set you apart. "In addition to many technical questions—knowing your algorithms, knowing your math—a great data scientist must know the business and be able to bring strong ideas to the table," said Rick Saporta, head of data science at Vydia. "When hiring, I would rather have one creative data scientist who has a strong understanding of our business, than a whole team of machine learning experts who will be in a constant 'R&D' mode."
The common symptoms of islands of implementation are incorrect use of technology standards, usability and interoperability issues, and excessive cost and time escalations due to changing business needs. The root cause is typically the absence of enterprise-level standards, organizational structures that lead to poor communication, or inappropriately trained resources deployed in projects. But these islands can also arise during corporate mergers and acquisitions, or from vendor lock-in. ... The root causes can include lack of architectural vision, technological disruptions, tight coupling, insufficient use of metadata, and the lack of an abstraction layer. Component architectures that provide flexible substitution of software modules in the face of fast-changing business and technology landscapes can solve this issue.
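The "abstraction layer with flexible substitution" idea can be made concrete with a short sketch. This is a toy illustration, with hypothetical class names: callers depend only on an interface, so one module can be swapped for another without touching the business logic that uses it.

```python
# Sketch of a component architecture: an abstraction layer lets modules
# be substituted without changing callers. All names are hypothetical.
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """Abstraction layer: business logic depends only on this interface."""
    @abstractmethod
    def charge(self, amount_cents: int) -> str: ...

class LegacyGateway(PaymentGateway):
    """One concrete component, e.g. inherited through a merger."""
    def charge(self, amount_cents: int) -> str:
        return f"legacy:charged:{amount_cents}"

class ModernGateway(PaymentGateway):
    """A replacement component that honors the same contract."""
    def charge(self, amount_cents: int) -> str:
        return f"modern:charged:{amount_cents}"

def checkout(gateway: PaymentGateway, amount_cents: int) -> str:
    # Loose coupling: any module conforming to the interface plugs in here,
    # so swapping vendors never requires rewriting the business logic.
    return gateway.charge(amount_cents)

print(checkout(LegacyGateway(), 500))
print(checkout(ModernGateway(), 500))  # substituted without changing checkout()
```

The same discipline at enterprise scale (stable contracts, substitutable implementations) is what prevents a vendor change or acquisition from creating another island.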
First there's the issue of "attribution." How do you correctly identify your attacker? It's not as easy as it sounds. What if an attack comes from a botnet? Not one computer, but thousands or millions spread over the globe. Owners of botnet computers may not know they're contributing to an attack. If your attacker is somewhere in the cloud, good luck finding her. Are you going to strike back against your cloud provider? They're potentially an innocent middleman. Second, ACDC wouldn't allow striking back against distributed denial-of-service (DDoS) attacks, a common attack type, because DDoS attacks don't involve unauthorized access. And who are you going to blame? Typical DDoS attacks come from devices that are part of the Internet of Things (IoT). Say Grandma's digital picture frame routed requests in a DDoS attack. Are you going to hack back against Grandma?
GDPR is only concerned with personally identifiable information (PII). GDPR does not apply to data that is not attached to a person, such as product or accounting information; you might still classify such data as sensitive and want to protect it, but GDPR treats it as non-PII and ignores those situations. GDPR identifies two classes of PII. First, there is data that can be used to uniquely identify a person, such as a Social Security number or an e-mail address, along with anything directly connected to those identifiers, such as purchase history. Then there is extra-sensitive data such as medical/health information, religion, sexual orientation, or any information on or collected from a minor. Do note that under GDPR, combinations of values that are not unique in isolation can still potentially identify an individual. So PII also includes identities that may be deduced from values like postcode, travel patterns, or multiple locations such as places of purchase.
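The two-class distinction above can be sketched as a toy field classifier. This is an illustration of the categorization logic only, not a legal taxonomy: the field names and category sets are hypothetical, and real classification must also account for quasi-identifier combinations (postcode plus purchase locations, etc.), which no per-field lookup can capture.

```python
# Toy classifier for the GDPR data classes described above.
# The field names and category sets are illustrative assumptions,
# not a legal taxonomy.

SPECIAL_CATEGORY = {"health_record", "religion", "sexual_orientation",
                    "minor_data"}                 # extra-sensitive PII
DIRECT_IDENTIFIERS = {"social_security_number", "email",
                      "purchase_history"}         # uniquely identifying PII

def classify_field(name: str) -> str:
    """Return the GDPR-style class of a single data field."""
    if name in SPECIAL_CATEGORY:
        return "PII: special category (extra-sensitive)"
    if name in DIRECT_IDENTIFIERS:
        return "PII: direct identifier"
    # Caution: fields that look harmless alone (postcode, travel) can
    # still identify a person in combination; a per-field check like
    # this one cannot detect such quasi-identifiers.
    return "non-PII (outside GDPR scope, may still be sensitive)"

print(classify_field("email"))
print(classify_field("religion"))
print(classify_field("product_catalog"))
```

The comment on the fall-through branch is the important caveat: the quasi-identifier case in the last sentence of the excerpt is exactly what a simple per-field classification misses.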
Quote for the day:
"Learn from the mistakes of others. You can never live long enough to make them all yourself." ― Groucho Marx