Daily Tech Digest - December 07, 2019

Why a computer will never be truly conscious


Living organisms store experiences in their brains by adapting neural connections in an active process between the subject and the environment. By contrast, a computer records data in short-term and long-term memory blocks. That difference means the brain’s information handling must also be different from how computers work. The mind actively explores the environment to find elements that guide the performance of one action or another. Perception is not directly related to the sensory data: a person can identify a table from many different angles, without having to consciously interpret the data and then ask their memory whether that pattern could be created by alternate views of an item identified some time earlier. Another perspective on this is that even the most mundane memory tasks are associated with multiple areas of the brain – some of which are quite large. Skill learning and expertise involve reorganization and physical changes, such as changing the strengths of connections between neurons.


One of the six customers impacted by the ransomware infection is FIA Tech, a financial and brokerage firm. The ransomware caused an outage of FIA Tech cloud services. In a message to customers, FIA Tech said "the attack was focused on disrupting operations in an attempt to obtain a ransom from our data center provider." FIA Tech did not name the data center provider, but a quick search identifies it as CyrusOne. We've been told by a source close to CyrusOne that the data center provider does not intend to pay the ransom demand, barring any unforeseen future developments. The company owns 45 data centers in Europe, Asia, and the Americas, and has more than 1,000 customers. It is also considering a sale after receiving takeover interest over the summer, according to Bloomberg. CyrusOne is a publicly traded, NASDAQ-listed company. In an SEC filing last year, the company explicitly listed "ransomware" as a risk factor for its business.


Costs are likely to continue to improve as, among other things, companies reduce the level of pricey cobalt in battery components and achieve manufacturing improvements as production volumes rise. But metals mining is already a mature process, so further declines there are likely to slow rapidly after 2025 as the cost of materials makes up a larger and larger portion of the total cost, the report finds. Deeper cost declines beyond 2030 are likely to require shifts from today's dominant lithium-ion chemistry to entirely different technologies, like lithium-metal, solid-state and lithium-sulfur batteries. Each of these is still at a much earlier development stage, so it’s questionable whether any will be able to displace lithium-ion by 2030, Field says. Gene Berdichevsky, chief executive of anode materials maker Sila Nanotechnologies, agrees it will be hard for the industry to consistently break through the $100/kWh floor with current technology. But he also thinks the paper discounts some of the nearer-term improvements we’ll see in lithium-ion batteries without full-fledged shifts to different chemistries.


Banking as a Platform - The Future is Now!

Whatever the type or size of a platform offering, several capabilities will be a must. Embedded analytics, running as an omnipresent undercurrent, will become a hygiene requirement and will also play a major role in revenue generation and profitability from the platform. AI and ML will be key differentiators in enhancing user experience and operational efficiency, leading to monetary benefits as well. BaaP’s DNA will be defined by the strength of the bank's API strategy, and complete agility in API usage will be the new norm for business teams. Scaling and multiple usage, together with data privacy and cyber security compounded by regulatory guidelines, will be crucial for the smooth and safe functioning of BaaP, and these aspects will be central to any decision-making by banks. It may sound a little too audacious to talk about the future of platform banking while it is still at a nascent stage. But history always serves as a great recipe for predicting the future (we are taking the data analytics route!).


AI Policies Are Setting The Stage To Transform Healthcare, But More Is Needed

New data standards proposed by the Department of Health and Human Services will empower patients and lead to better, faster diagnoses. The proposal would require electronic medical record (EMR) companies to provide interfaces, called APIs, for patients to access and share their health data. Currently it is ridiculously complex and expensive for patients to get copies of their own records; shockingly, it may cost over $500 to get your medical record. Accessibility and data sharing are critical for better, faster diagnosis and treatment. AI has been used to predict heart attacks five years into the future. It is also able to predict who is at the greatest risk of suffering from depression. The new standards would put the patient in control of who uses their data and for what purposes. Entrepreneurial companies are leveraging venture dollars to build the best AI capabilities in the world, but they need access to health data to prove their benefits to patients. Patients should be able to choose how their data is used.


Why A Human Firewall Is The Biggest Defence Against Data Breach

Hackers are targeting servers that haven’t been set up correctly, giving them access to sensitive data with minimal effort. Cloud-based systems such as Office 365 that lack multi-factor authentication, or web-based systems that are not patched, leave vulnerabilities that can be exploited. Hardware such as firewalls can also be configured incorrectly, and poor security settings on individual devices can open further exploitable loopholes. ... A hacker only needs to gain access to one user’s account to gain control of the compromised network and its data. An approach known as the “known good” model identifies anomalies that stray from the established normal baseline and highlights them as a potential threat or cyber-attack. Business leaders are widely criticised and held accountable for failing to protect their consumers’ data, especially in light of the vast IT and training budgets at their disposal, yet it is the daily performance of front-line staff that reveals the true strengths and weaknesses within any organisation.
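As a loose sketch of the "known good" model described above (the numbers, feature, and threshold here are illustrative assumptions, not taken from any specific product), anomalies can be flagged by measuring how far an observation strays from a learned baseline:

```python
from statistics import mean, stdev

# Illustrative baseline: a user's historical daily login counts.
baseline = [42, 45, 40, 44, 43, 41, 46, 44]

def is_anomalous(observation, history, z_threshold=3.0):
    """Flag values that stray too far from the established normal baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(observation - mu) > z_threshold * sigma

print(is_anomalous(44, baseline))   # within the baseline -> False
print(is_anomalous(400, baseline))  # potential threat -> True
```

Real products build far richer baselines per user and per device, but the principle is the same: deviation from "known good" is what gets highlighted.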


Two Russians indicted over Dridex and Zeus malware


“Sitting quietly at computer terminals far away, these cyber criminals allegedly stole tens of millions of dollars from unwitting members of our business, non-profit, governmental, and religious communities. Each and every one of these computer intrusions was, effectively, a cyber-enabled bank robbery. We take such crimes extremely seriously and will do everything in our power to hold these criminals to justice.” The losses incurred through the activities of Yakubets’ group – known as Evil Corp – totalled hundreds of millions of pounds in the UK, the US, and other countries. Additional investigations in the UK targeted a network of money launderers who funnelled profits back to Evil Corp, for which eight people have already gone to prison. Other intelligence supplied through UK law enforcement has helped support sanctions brought against the group by the US Treasury’s Office of Foreign Asset Control. The NCA described the operation as a sophisticated and technically skilled one, which represented one of the most significant cyber crime threats ever faced in the UK.


Your Privacy Could Be at Risk Without These Updates to Behavioral Biometrics


Mastercard is one of the major brands investing in passive biometrics. The goal is to determine the probability that the authenticated user is actually present during a given interaction. The credit card provider’s system evaluates more than 300 signals to reach a conclusion, including how a person navigates around a site on their device or the amount of pressure they put on a touch-sensitive screen. Passive behavioral biometric measurements also make it possible to catch strange behaviors that might not immediately become apparent through small samples of data. For example, if a person typically uses the scroll wheel on a mouse to navigate, but then switches to using keyboard commands, that change could indicate someone else has gotten access to a system and is using it fraudulently. Keep in mind that passive and active biometrics both have associated pros and cons; no single method works best in every case. However, the use of passive biometrics to gauge probabilities is relatively new. Since well-known brands like Mastercard are working with it, there’s a good chance this option will become even more prominent.
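As a toy illustration of how such a switch might be detected (the event names and session data are entirely hypothetical), one signal is simply whether the dominant input method of the current session matches the user's history:

```python
from collections import Counter

def dominant_input_method(events):
    """Return the most frequent input method in a session's event log."""
    return Counter(events).most_common(1)[0][0]

# Hypothetical event logs: the user's usual session vs. the current one.
usual_session = ["scroll_wheel"] * 80 + ["keyboard"] * 5
current_session = ["keyboard"] * 70 + ["scroll_wheel"] * 3

if dominant_input_method(current_session) != dominant_input_method(usual_session):
    print("Behavior change detected: flag session for review")
```

A production system like Mastercard's weighs hundreds of such signals together rather than acting on any single one.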


FBI recommends that you keep your IoT devices on a separate network

"Your fridge and your laptop should not be on the same network," the FBI's Portland office said in a weekly tech advice column. "Keep your most private, sensitive data on a separate system from your other IoT devices," it added. ... The reasoning behind it is simple. By keeping all the IoT equipment on a separate network, any compromise of a "smart" device will not grant an attacker a direct route to a user's primary devices -- where most of their data is stored. Jumping across the two networks would require considerable effort from the attacker. However, placing primary devices and IoT devices on separate networks might not be that easy for non-technical users. The simplest way is to use two routers. The smarter way is to use "micro-segmentation," a feature found in the firmware of most WiFi routers, which allows router admins to create virtual networks (VLANs). VLANs behave as different networks, even though they effectively run on the same router. While isolating IoT devices on their own network is the best course of action for both home users and companies alike, this wasn't the FBI's only advice on dealing with IoT devices.


Usability Testing and Testing APIs with Hallway Testing

Hallway testing can be described as using "random" people or groups of people to test software products and interfaces. The "randomness" of a person depends on what we are trying to test. Marchewka suggested trying to engage people who will actually be using the product (i.e. members of the target group) to get the best understanding of how they will do that. For their hallway testing sessions they invite a truly random group of people if they are checking a mobile app, and a random group of API users if they are verifying the UX of an API. By drawing on the specific background and experience of the people taking part in a particular hallway testing session, we can uncover the inefficiencies of the user interface in a tested product, said Marchewka. The app or software does not need to have a GUI to benefit from hallway testing; it can be used as part of API prototyping activity, as Marchewka explained. Consumers of an API can be asked to use an early version during a hallway testing session; for example, creators can find out if methods are named correctly.



Quote for the day:


"Courage is leaning into the doubts and fears to do what you know is right even when it doesn't feel natural or safe." -- Lee Ellis


Daily Tech Digest - December 05, 2019

The Rise, Disappearance, And Retirement Of Google Co-Founders

It’s a fitting end for two of the most mysterious tech leaders of a generation, who are both exiting their company as it hovers near $1 trillion in market cap. But it’s also a troubling time for Google. The search giant has faced increasing scrutiny from employees, media organizations, activists, regulators, and lawmakers since Page and Brin first stepped back in the summer of 2015. And many of those controversies are problems of Page and Brin’s creation, either because the duo didn’t foresee the ways in which Google could do harm or because they explicitly steered the company in a direction that flouted standard corporate ethics. In that context, it’s important to look back at the big moments in both men’s careers and how the actions they took have had an outsized impact not just on the tech industry, but on the internet and society itself. What Page and Brin have built will likely last for decades to come, and knowing how Google got to where it is today will be an important piece in the puzzle of figuring out where it goes in the future. ... Although Google is now one of the most powerful forces in online advertising on the planet, Page and Brin weren’t too keen on turning their prototype search engine into an ad-selling machine, at first.



Planning for an intelligent future


Compared to current planning activities, which invariably work on pre-defined cycles such as weekly or monthly processes, intelligent planning can be considered to have more of an ‘always-on’ approach. ... As such, any business that has access to data that exceeds the volume that humans can analyse and understand, will need intelligent planning to remain competitive. For example, a large retail organisation can harvest data from millions of daily transactions to make better buying, customer engagement, and operational decisions. But they don’t need to stop at short-term future actions; instead they should consider using social media sentiment and detailed demographics to make longer term, strategic decisions around areas such as range, store locations and customer experience. Financial services is another prime candidate for intelligent planning, particularly where understanding and influencing consumer behaviour is involved, for anything from calculating the probability of a customer renewing their insurance policy; the likelihood of a loan holder defaulting on their payments; or the future spending profile of credit card customers.


How AI is Transforming the Banking Sector


Banks are always under intense pressure from regulatory bodies to enforce the most recent regulations. These regulations are there to protect banks and customers from fraudulent activities while at the same time reducing financial crimes like money laundering, tax evasion, and terrorism financing. AI in banking also helps ensure that banks are compliant with the most recent regulations. AI relies on cognitive fraud analytics that watches customer behavior, tracks transactions, recognizes dubious activities and assesses the data of different compliance systems. Businesses can remain up to date with compliance rules and regulations through the use of AI. AI systems can read compliance requirements and detect any changes in those requirements through deep learning and natural language processing. Through technologies like analytics, deep learning, and machine learning, banks can stay on top of ever-evolving regulatory requirements and keep their own policies aligned with them.


Augmented reality in retail: Virtual try before you buy

“Nike Fit is a transformative solution and an industry first—using a digital technology to solve for massive customer friction,” Nike writes in its press release for the launch of the app. “In the short term, Nike Fit will improve the way Nike designs, manufactures, and sells shoes—product better tailored to match consumer needs. A more accurate fit can contribute to everything from less shipping and fewer returns to better performance.” ... “The fashion industry has not traditionally been geared toward helping people understand how clothes will actually fit,” the company writes in its press release. “Gap is committed to winning customer trust by consistently presenting and delivering products that make customers look and feel great, and we are using technology to get there.” ... As the technology evolves and gives users more and more accurate renderings of how digital objects look in physical spaces, I expect that more and more brands and industries will hop onto the AR marketing bandwagon. From fashion and accessories to footwear and home décor, and beyond, AR has the potential to transform and completely reimagine customer experiences.


Why the sheer scale of DevOps testing now needs machine learning


The potential value of machine learning is particularly evident in mobile and web app testing, because these are very fragmented and complex platforms to handle and understand. What ML can do in this context is keep all those platforms visible, connected, and in a ready-state mode. In a test lab, ML helps to surface when something is outdated, disconnected from WiFi, or has another problem – and moreover, helps understand why that has happened. Another way in which ML helps is through showing trends and patterns, helping not only to visualise all that data but to provide further insight and make sense of what has happened over the past weeks or months. For instance, it can identify the most problematic functional area in an application, such as the top 5 failing tests over the past 2-3 testing cycles, or which mobile/web platforms have been most error-prone over the past cycles. Was a failure caused by the lab, was it a pop-up, or a security alert? This really matters. Teams invest time, resources and money in automating test activities, but where all this really has an impact and adds value is at the reporting stage.
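The failure-trend reporting described above boils down to simple aggregation; as a sketch (the test names and counts are made up):

```python
from collections import Counter

# Hypothetical failure records collected over the past testing cycles.
failures = [
    "login_flow", "checkout", "login_flow", "search", "login_flow",
    "checkout", "push_notifications", "search", "login_flow", "profile_edit",
]

# Surface the most problematic functional areas, e.g. the top 5 failing tests.
top_failing = Counter(failures).most_common(5)
for test_name, count in top_failing:
    print(f"{test_name}: {count} failures")
```

The ML part comes in classifying *why* each failure happened (lab issue, pop-up, security alert) before it is counted, which this sketch deliberately leaves out.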


The 10 most important cyberattacks of the decade


Yahoo deserves the first mention because of the sheer size of its breach and the damaging effect it had on the company's ability to compete as an email and search engine platform. In 2013, all three billion of Yahoo's accounts were compromised, making the breach the largest in the history of the internet. It took the company three years to notify the public that everyone's names, email addresses, passwords, birth dates, phone numbers and security answers had been sold on the Dark Web by hackers.  Security experts say the Yahoo breach is notable because of how it was mishandled by the company and the devastating effect it had on Verizon's $4.8 billion acquisition. Yahoo initially discovered that a breach occurred in 2015 exposing 500 million accounts. ... The size of the Equifax breach pales in comparison to the value of the data exposed to hackers. As one of America's largest credit bureaus, the company had the most sensitive data on hundreds of millions of people. Hackers gained access to the information of 143 million Equifax customers, including their names, birth dates, drivers' license numbers, Social Security numbers and addresses. More than 200,000 credit card numbers were released and 182,000 documents with personally identifying information were accessed by cybercriminals.


To stop a tech apocalypse we need ethics and the arts


Matt Reaney, the chief executive and founder of Big Cloud – a recruitment company that specialises in data science, machine learning and AI employment – has argued that technology needs more people with humanities training: "[The humanities] give context to the world we operate in day to day. Critical thinking skills, deeper understanding of the world around us, philosophy, ethics, communication, and creativity offer different approaches to problems posed by technology." Reaney proposes a “more blended approach” to higher education, offering degrees that combine the arts and STEM. Another advocate of the interdisciplinary approach is Joseph Aoun, President of Northeastern University in Boston. He has argued that in the age of AI, higher education should be focusing on what he calls “humanics”, equipping graduates with three key literacies: technological literacy, data literacy and human literacy. The time has come to answer the call for humanities graduates capable of crossing over into the world of technology so that our human future can be as bright as possible.


Learning algorithms and the self-supervised machine with Dr. Philip Bachman

Supervised learning is sort of what’s had the most immediate success and what’s driving a lot of the deep learning-powered technologies that are being used for doing things like speech recognition in phones or doing automated question answering for chat bots and stuff like that. So supervised learning refers to a subset of the techniques that people apply when they have access to a large amount of data and they have a specific type of action that they want a model to perform when it processes that data. And what they do is, they get a person to go and label all the data and say, okay, well this is the input to the model at this point in time, and given this input, this is what the model should output. So you’re putting a lot of constraints on what the model is doing, and constructing those constraints manually by having a person look at a set of a million images and, for each image, say, oh, this is a cat, this is a dog, this is a person, this is a car.
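The labeled-data setup Bachman describes can be sketched in a few lines. This toy nearest-neighbour classifier (with made-up feature vectors standing in for images) shows how human-provided labels constrain what the model should output for each input:

```python
# Hypothetical labeled dataset: (feature vector, human-assigned label).
training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.2), "dog"),
    ((4.8, 5.1), "dog"),
]

def predict(x):
    """1-nearest-neighbour: output the label of the closest training input."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    _, label = min(training_data, key=lambda pair: dist(pair[0], x))
    return label

print(predict((1.1, 1.0)))  # "cat"
print(predict((5.1, 5.0)))  # "dog"
```

Real supervised systems swap the nearest-neighbour rule for a learned model, but the contract is identical: a person decided, per input, what the output should be.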


What a cloud-native approach to RPA could mean to your business


Enterprise’s affinity to cloud computing hasn’t traditionally been reflected by the RPA industry. That is, until now – with the world’s first cloud-native RPA platform, we’re bringing the advantages of cloud-native, intelligent RPA deployments to organisations worldwide. For business users, cloud-native RPA operates as a self-service technology accessed via a web-based graphical interface from anywhere. With a single click or drag-and-drop motion, users can automate those parts of any job that don’t require human creativity, problem-solving capabilities, empathy, or judgment. Just as with popular Software-as-a-Service (SaaS) apps, users can create what they need using an intuitive web interface within the browser. For many common bots, no coding is required. There are no large client downloads to install and manage or commands to memorise; automation and processes are exposed via drag-and-drop functionality and flow charts. Also, because there is no software client, IT doesn’t have to get involved. Infrastructure management costs go away, significantly reducing the total cost of ownership (TCO). 


At an even more fundamental level, anyone looking at the state of enterprise security today understands that whatever we’re doing now isn’t working. “The perimeter-based model of security categorically has failed,” says Forrester principal analyst Chase Cunningham. “And not from a lack of effort or a lack of investment, but just because it’s built on a house of cards. If one thing fails, everything becomes a victim. Everyone I talk to believes that.” Cunningham has taken on the zero-trust mantle at Forrester, where analyst Jon Kindervag, now at Palo Alto Networks, developed a zero-trust security framework in 2009. The idea is simple: trust no one. Verify everyone. Enforce strict access-control and identity-management policies that restrict employee access to the resources they need to do their job and nothing more. Garrett Bekker, principal analyst at the 451 Group, says zero trust is not a product or a technology; it’s a different way of thinking about security. “People are still wrapping their heads around what it means. Customers are confused and vendors are inconsistent on what zero trust means. But I believe it has the potential to radically alter the way security is done.”
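A minimal sketch of that "verify everyone, deny by default" posture (the roles and resources here are invented for illustration) looks like an explicit allow-list consulted on every request:

```python
# Hypothetical least-privilege policy: each role maps to the only
# resources it may touch; anything not listed is denied by default.
POLICY = {
    "accountant": {"ledger", "invoices"},
    "engineer": {"source_repo", "build_server"},
}

def is_allowed(role, resource):
    """Trust no one by default; verify every request against the policy."""
    return resource in POLICY.get(role, set())

print(is_allowed("accountant", "ledger"))       # True
print(is_allowed("accountant", "source_repo"))  # False: not their job
print(is_allowed("visitor", "ledger"))          # False: unknown role
```

Real zero-trust deployments add identity verification, device posture checks, and continuous re-evaluation on top, but the default-deny core is the same.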



Quote for the day:


"Leadership is a journey, not a destination. It is a marathon, not a sprint. It is a process, not an outcome." -- John Donahoe


Daily Tech Digest - December 04, 2019

10 bad programming habits we secretly love

For the last decade or so, the functional paradigm has been ascending. The acolytes for building your program out of nested function calls love to cite studies showing how the code is safer and more bug-free than the older style of variables and loops, all strung together in whatever way makes the programmer happy. The devotees speak with the zeal of true believers, chastising non-functional approaches in code reviews and pull requests. They may even be right about the advantages. But sometimes you just need to get out a roll of duct tape. Wonderfully engineered and gracefully planned code takes time, not just to imagine but also to construct and later to navigate. All of those layers add complexity, and complexity is expensive. Developers of beautiful functional code need to plan ahead and ensure that all data is passed along proper pathways. Sometimes it’s just easier to reach out and change a variable. Maybe put in a comment to explain it. Even adding a long, groveling apology to future generations in the comment is faster than re-architecting the entire system to do it the right way.
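The trade-off reads something like this in Python (a contrived example, of course):

```python
# Functional style: state is threaded explicitly through a pure function.
def apply_discount(order, rate):
    return {**order, "total": order["total"] * (1 - rate)}

order = {"id": 7, "total": 100.0}
discounted = apply_discount(order, 0.1)

# Duct-tape style: just reach out and change the variable.
# HACK: one-off December promo; apologies to future generations.
order["total"] *= 0.9

print(discounted["total"])  # 90.0
print(order["total"])       # 90.0, mutated in place
```

Both get you to 90.0; one is easier to reason about later, the other shipped five minutes ago.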



Advancements in explainable AI will continue in 2020 and beyond as new standards are developed around the technical definition of explainability, slowly followed by new technologies to address the explainability problem for business leaders and other non-technical audiences. In real estate, for example, offering a compelling explanation for why a mortgage application was rejected by an AI-driven platform will eventually be a necessity as AI adoption continues. Although we’ll see evolving technical tools and standards, progress on layperson tools will be slower, with some narrow, domain-specific solutions (e.g., non-technical explainability for finance) emerging first. Like the general public’s understanding of ‘the web’ in the 90s, awareness, understanding and trust in AI will gradually increase as the capabilities and use of the technology spread. Using sophisticated tooling to automate what we would call human creativity is now commonly referred to as AI. However, the term has become almost meaningless, as “AI” now covers everything from predictive analytics to Amazon Echo speakers. The industry needs to get its arms around real AI.

Volkswagen Is Accelerating One Of The World’s Biggest Smart-Factory Projects

The biggest challenge, says Jean-Pierre Petit, Capgemini’s director of digital manufacturing, in an emailed comment to Forbes, is to “cross the chasm” from an initial pilot in a single plant to full-scale deployments, which is where the real benefits of digitization kick in. In particular, smart-factory projects require IT teams to work closely with “operational technology” (OT) groups managing machinery and other tech inside factories. Often, OT teams have become used to working quite independently and may resist IT’s efforts to drive change. By working closely together on VW’s industrial cloud project, Hofmann and Walker are sending a strong signal to their respective teams about the need for tight collaboration. The decision to launch pilots at several factories this year rather than just one was also deliberate. “You can put a ton of slides up [about the industrial cloud], but nobody is interested in that,” says Dirk Didascalou, one of the senior AWS executives involved in the project. “They need to see it working first.”



The question that helps businesses overcome unconscious bias

In the workplace, when you’re considering someone for a project or a promotion, turn that mantra into a question: What do I know about this person? You may have a feeling that this person is someone you do or don’t like or connect with, or a sense that this person “is ready for” and “deserves” the opportunity. Guided by that sense, you can easily pick and choose facts from their experience and work records to reinforce your decision. But when you start only with facts, a different picture can emerge. So drill down exclusively on what’s concrete. What projects did this person take part in or help lead, and how successful were they? What do the 360-degree assessments of this person show? What demonstrable impact did this person’s work have on sales, revenues, morale? Sometimes, the facts will back up a general sense that you have, or a description that someone else gave you. 


It's almost a cliché to point out how much of software today is built on or with open source. But Ian Massingham recently reminded me that for all the attention we lavish on back-end technologies--Linux, Docker containers, Kubernetes, etc.--front-end open source technologies actually claim more developer attention. Much of the front-end magic that developers love today was born at early web giants like Google and Facebook. Front-end frameworks make it possible for Facebook, Google, LinkedIn, Pinterest, Airbnb, and others to iterate quickly, scale, deliver consistently fast responsiveness and, in general, mostly delight their users. Indeed, their entire businesses depend on great user experiences. While venture investors historically have plowed their funds into back-end startups creating open source software, the same is not nearly as true of the front-end. Accel, Benchmark, Greylock, and other top-tier VCs made fortunes backing enterprise open source software startups like Heroku, MuleSoft, Red Hat, and many more.


Migrating to GraphQL at Airbnb

Two GraphQL features Airbnb relied upon during this early stage were aliasing and adapters. Aliasing allowed mapping between camel-case properties returned from GraphQL and snake-case properties of the old REST endpoint. Adapters were used to convert a GraphQL response so that it could be recursively diffed with a REST response, and ensure GraphQL was returning the same data as before. These adapters would later be removed, but they were critical for meeting the parity goals of the first stage. Stage two focuses on propagating types throughout the code, which increases confidence during later stages. At this point, no runtime behavior should be affected. The third stage improves the use of Apollo. Earlier stages directly used the Apollo Client, which fired Redux Actions, and components used the Redux store. Refactoring the app using React Hooks allows use of the Apollo cache instead of the Redux store.  A major benefit of GraphQL is reducing over-fetching.
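The adapter idea generalises to a small recursive key-rewriter; this sketch (not Airbnb's actual code) converts camel-case GraphQL keys so a response can be diffed against its snake-case REST counterpart:

```python
import re

def to_snake(name):
    """Convert a camelCase key to snake_case, e.g. listingId -> listing_id."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def adapt(value):
    """Recursively rewrite keys so a GraphQL response can be diffed
    field-by-field against the equivalent REST response."""
    if isinstance(value, dict):
        return {to_snake(k): adapt(v) for k, v in value.items()}
    if isinstance(value, list):
        return [adapt(v) for v in value]
    return value

graphql_response = {"listingId": 42, "hostInfo": {"firstName": "Ada"}}
print(adapt(graphql_response))
# {'listing_id': 42, 'host_info': {'first_name': 'Ada'}}
```

Once parity with the REST endpoint is proven, an adapter like this can be deleted, which is exactly what happened to Airbnb's adapters after the first stage.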


ASP.NET Core Microservices: Getting Started

Let's consider that we're exploring microservices architecture, and we want to take advantage of polyglot persistence to use a NoSQL database (Couchbase) for a particular use case. For our project, we're going to look at a Database per service pattern, and use Docker (docker-compose) to manage the database for the ASP.NET Core Microservices proof of concept. This blog post will be using Couchbase Server, but you can apply the basics here to the other databases in your microservices architecture as well. I'm using ASP.NET Core because it's a cross-platform, open-source framework. Additionally, Visual Studio (while not required) will give us a few helpful tools for working with Docker and docker-compose. But again, you can apply the basics here to any web framework or programming language of your choice. I'll be using Visual Studio for this blog post, but you can achieve the same effect (with perhaps a little more work) in Visual Studio Code or plain old command line.


Amazon Just Joined The Race To Dominate Quantum Computing In The Cloud

AWS is something of a latecomer to the quantum cloud. IBM kicked off the trend several years ago, and since then a wave of other companies have unveiled cloud-based offerings, including Amazon’s partners D-Wave and Rigetti. Nor is AWS the first cloud provider to offer access to a range of other companies’ quantum hardware: Microsoft took that honor when it launched its Azure Quantum cloud offering last month. Yet AWS is likely to become a force to be reckoned with in the field because of a unique advantage it has over its rivals. ... AWS became a cloud powerhouse because many of the services it now offers were initially developed for Amazon’s vast commercial empire. The same scenario could well play out with quantum computing. For instance, one of the things quantum machines are particularly good at is optimizing delivery routes. AWS could—quite literally—road test a quantum-powered service that lets Amazon plot the most efficient directions for its delivery vehicles to take as they drop off parcels. The machines could also help Amazon optimize the way goods flow through its vast warehouse network.


Simplifying data management in the cloud

Attempting to leverage the approaches and tools we use today will add complexity until the systems eventually collapse from the weight of it. Just think of the number of tools in your data center today that cause you to ask "what were they thinking?" Indeed, they were thinking much the same way we're thinking today, including looking for tactical solutions that will eventually not provide the value they once did—and in some cases providing negative value.  I've come a long way to make a pitch to you, but as I'm thinking about how we solve this issue, one approach keeps popping up as the most likely solution. Indeed, it's been kicked around in different academic circles: the notion of self-identifying data. I'll likely hit this topic again at some point, but here's the idea: Take the autonomous data concept a few steps further by embedding more intelligence with the data and more knowledge about the data itself. We would gain the ability to have all knowledge around the use of the data available from the data itself, no matter where it's stored or where the information is requested.
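As a toy sketch of what self-identifying data could look like (the field names and policy model here are my own illustrative assumptions, not from the article), the payload carries its schema, provenance, and allowed uses with it wherever it is stored:

```python
from dataclasses import dataclass, field

@dataclass
class SelfIdentifyingRecord:
    """A payload bundled with knowledge about itself."""
    payload: dict
    schema: str                 # what the data is
    origin: str                 # where it came from
    allowed_uses: list = field(default_factory=list)  # how it may be used

    def permits(self, use: str) -> bool:
        """Answer policy questions from the data itself, wherever it lives."""
        return use in self.allowed_uses

record = SelfIdentifyingRecord(
    payload={"customer_id": 42, "balance": 1000.0},
    schema="customer-account/v1",
    origin="crm-export-2019-12-01",
    allowed_uses=["analytics", "billing"],
)
print(record.permits("marketing"))  # False
```

The point of the sketch: any system holding the record can ask it what it is and how it may be used, without consulting a separate catalog.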


Survey: IT pros see career potential in as-a-Service trend

IT pros over 55 are most concerned with data complexity slowing down future data migrations. One question in the survey suggests that instead of tearing down data silos, cloud migration projects may create new ones. Seventy-seven percent of respondents say that data is siloed between public and private clouds. Miller said that to avoid this, organizations need to choose the aaS model that makes the most business and policy sense. "Companies need to adopt a model that is not tied to one cloud or one premise but has the flexibility to move data and applications to where business needs are best met," he said. "If you adopt the right aaS model, you're breaking down the silos and driving overall efficiencies." While the majority of companies state that they have implemented at least some aaS projects, 66% of respondents say that IT pros avoid this new way of working out of fear of losing their jobs. The younger respondents (ages 22 to 34) were most likely to think this at 70%, compared to 67% of 35- to 54-year-olds and only 45% of 55+ year-olds.



Quote for the day:


"Leadership development is a lifetime journey, not a quick trip." -- John Maxwell


Daily Tech Digest - December 03, 2019

Insider risk management – who’s the boss?

The CRO may be the best person to lead the ITP. This largely depends, however, on the scope and role of the CRO itself. Some CROs focus only on the strategic risk of the company. They set organizational risk tolerances and may develop methodologies for capturing and measuring risk postures. In this model, the operational risk is still wholly “owned” by the operational leaders (CSO, CISO, business units, etc.). CROs that fall into this category are not well positioned to lead an ITP because they lack the visibility and operational granularity required for an ITP. Other CROs, however, focus on both strategic and operational risk of the company. They not only set organizational risk tolerances, but also are involved in measuring, managing, and improving the operational risk posture of the organization. CROs in this group are well positioned to lead the ITP. They will often have the necessary high-level authority (report to CEO, Audit Committee, etc.) and by virtue of their scope, will also have the necessary relationships across all functions of the organization (business units, legal, HR, CSO, CISO, etc.).



Redgate’s journey to DevOps

While Redgate had a culture that was favorable towards DevOps, introducing it was a different story. The software development teams were eager to move to the shorter development cycles and continuous iteration of development and testing that DevOps promotes, but new Agile processes and practices had to be adopted to make it happen. The question was, which processes and practices? Scrums? Kanban boards? A3s? Standups? Burndown charts? The Deming Cycle? Monthly releases? Weekly releases? Pair programming? Mob programming? Extreme programming? Trunk-based development? Continuous delivery or continuous deployments? As you can see, there are many aspects to Agile so the first job was to understand them and see which could – and should – be implemented at Redgate. In 2008, the first project to use Scrum began at Redgate. The Agile technique breaks down work into goals that can be completed within a fixed time period of one month or two weeks. At the end of each of these sprints, the ideal is to have software ready to release.


Why you need to pay more attention to combatting AI bias


While managing AI-driven functions within an enterprise can be valuable, it can also present challenges, the DataRobot report said. "Not all AI is treated equal, and without the proper knowledge or resources, companies could select or deploy AI in ways that could be more detrimental than beneficial." The survey found that more than a third (38%) of AI professionals still use black-box AI systems--meaning they have little to no visibility into how the data inputs into their AI solutions are being used. This lack of visibility could contribute to respondents' concerns about AI bias occurring within their organization, DataRobot said. AI bias is occurring because "we are making decisions on incomplete data in familiar retrieval systems,'' said Sue Feldman, president of the cognitive computing and content analytics consultancy Synthexis. "Algorithms all make assumptions about the world and the priorities of the user. That means that unless you understand these assumptions, you will still be flying blind." This is why it is important to use systems that include humans in the loop, instead of making decisions in a vacuum, added Feldman, who is also co-founder and managing director of the Cognitive Computing Consortium. They are "an improvement over completely automatic systems," she said.



How to Integrate Infosec and DevOps Using Chaos Engineering

D.I.E. is an acronym where D is for distributed, meaning that service outages, like a denial of service, are less impactful. I is for immutable, meaning that unexpected changes are easier to detect and to roll back. And E is for ephemeral, meaning the value of assets is driven as close to zero as possible from the attackers' perspective. These are the system properties that chaos security principles help build into systems that are secure by design: start with the expectation that security controls will fail and prepare accordingly, then embrace the ability to respond to security incidents instead of merely trying to avoid them. Shortridge recommended using game days to practice potentially risky scenarios in a safe environment. Moreover, she recommends using production-like environments to gain a better understanding of how things will work in a complex system. Shortridge also recommends starting with simple testing before moving on to more sophisticated testing. For instance, build tests that users can run effectively with accessible scenarios, such as phishing or SQL injection.


RT? – Making Sense of High Availability

Monitoring is the cornerstone of your RTO target. If you don't know there is a problem, you can't fix it. Many blogs and articles focus on the next three parts, but let's be honest: if you don't know there's a problem, you can't respond. If your logs operate on a 5-minute delay, then you need to factor those 5 minutes into your RTO. From there, the next piece is response time, and I mean this in the true sense of how quickly you can trigger a failover to your DR state. How quickly can you triage the problem and respond to the situation? The best RTO targets leverage as much automation as possible here. Next, by looking at data replication, we can ensure that we are able to bring any data stores back up quickly and maintain business continuity. This is important because every time we have to restore a data store, that takes time and pushes out our RTO: being able to fail over in 2 minutes doesn't do you much good if it takes 20 minutes to get the database up. Finally, failover: if you are in a state where you need to fail over, how long does that take, and what automation and steps can you take to shorten that time significantly?
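To make the arithmetic concrete, an effective RTO is just the sum of these stages; the numbers below are hypothetical, not from the article:

```python
# Hypothetical recovery stages, in minutes
detection = 5    # monitoring/log delay before the alert fires
triage = 10      # time to diagnose and decide to fail over
restore = 20     # time to bring the data store back up
failover = 2     # time to switch traffic to the DR site

effective_rto = detection + triage + restore + failover
print(f"Effective RTO: {effective_rto} minutes")  # Effective RTO: 37 minutes
```

Note how the 2-minute failover is dwarfed by detection and restore time, which is exactly the point about fast failover being wasted on a slow database recovery.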


Working with Identity Server 4

Identity Server 4 is the tool of choice for getting bearer JSON web tokens (JWT) in .NET. The tool comes in a NuGet package that can fit in any ASP.NET project. Identity Server 4 is an implementation of the OAuth 2.0 spec and supports standard flows. The library is extensible to support parts of the spec that are still in draft. Bearer JWTs are the preferred way to authenticate requests with a backend API. The JWT is stateless and aids in decoupling software modules. The JWT itself is not tied to the user session and works well in a distributed system. This reduces friction between modules since it does not share dependencies like a user session. In this take, I'll delve deep into Identity Server 4. This OAuth implementation is fully compatible with the spec. I'll start from scratch with an ASP.NET Web API project using .NET Core. I'll stick to the recommended version of .NET Core, which is 3.0.100 at the time of this writing. You can find a working sample of the code here. To begin, I'll use CLI tools to keep the focus on the code without visual aids from Visual Studio.
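To illustrate why a bearer JWT is stateless: all of its claims travel inside the token itself as base64url-encoded JSON, so any service can read them without a shared session store. A quick sketch with a hand-built toy token (not one issued by Identity Server; the claim values are illustrative):

```python
import base64
import json

def b64url(data: dict) -> str:
    """Base64url-encode a dict as compact JSON, unpadded, as JWTs do."""
    raw = json.dumps(data, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def decode_segment(segment: str) -> dict:
    """Decode one JWT segment back into a dict (re-adding padding)."""
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Toy token: header.payload.signature (real signature omitted here)
header = {"alg": "RS256", "typ": "JWT"}
payload = {"sub": "alice", "scope": "api1", "exp": 1700000000}
token = f"{b64url(header)}.{b64url(payload)}.sig"

claims = decode_segment(token.split(".")[1])
print(claims["sub"])  # alice
```

In production the signature is what matters: the API validates it against the issuer's public key, so it can trust the claims without calling back to the identity server on every request.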


The IT4IT standard was conceived of more than eight years ago by a small group of European companies that saw the need for normative guidance to direct functionality and interoperability for large, multi-vendor IT management software portfolios. Each had tried to create a tool orchestration and interoperability architecture themselves, at great cost. Lesson learned: Their solutions were very similar and, in fact, just the kind of thing that should be a general solution or standard, not proprietary or unique to one company. Supported by HP Software, they worked together as a consortium to merge their individual efforts into a common model that could stand as a universally available normative standard for the industry. This effort resulted in IT4IT version 1.0. At that point the IP was donated to The Open Group, an organization known for its management of several industry standards such as UNIX, TOGAF and others. The private consortium became the IT4IT Forum and their architecture evolved into the publicly available IT4IT Reference Architecture standard.


Menlo Security CEO on what small companies should know about cybersecurity

We've seen two things happen. One, probably over the last 10 years — security budgets have probably tripled, if not more. So security has become much more front of mind for the CIO and boards as we keep reading about these high-profile breaches that end up causing a lot of damage and reputation loss for the companies that were breached. And in that same timeframe that budgets have gone 3X, I would say that the number of infections has probably risen by a factor of three as well, if not more. And that's counterintuitive, because normally the more you invest in a certain solution set, the better results you get. So the fact that it's not working is, I'd say, kind of the big challenge — and people miss that. They keep investing in the same concepts, the same solutions, the same vendors. ... There wasn't a great understanding of just how bad the threat could be. But I think we've seen enough cyber incidents in the headlines, including some high-profile events like those that affected our U.S. elections and various things like that.


New Android bug targets banking apps on Google Play store

As Promon describes it, StrandHogg allows a malicious app masquerading as a legitimate one to ask for certain permissions, including access to SMS messages, photos, GPS, and the microphone. Unsuspecting users approve the requests, thinking they're granting permission to a legitimate app and not one that's fraudulent and malicious. When the user enters their login credentials in the app, that information is immediately sent to the attacker, who can then sign in and control sensitive apps. The vulnerability itself lies in the multitasking system of Android, Promon's marketing and communication director, Lars Lunde Birkeland, said. The exploit is based on an Android control setting called "taskAffinity," which allows any app, including malicious ones, to freely assume any identity in the multitasking system, Birkeland said. A specific malware sample analyzed by Promon was not on Google Play but was instead installed through dropper apps and hostile downloaders available on Google's mobile app store, according to Promon. Such apps either have or pretend to have the features of games, utilities, and other popular apps but actually install additional apps that can deploy malware or steal user data.
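The taskAffinity setting is an ordinary Android manifest attribute. As a hypothetical illustration (the activity and package names below are made up, not taken from Promon's analysis), a malicious app could declare an activity whose affinity matches a targeted app's package name, causing Android to place it in that app's task:

```xml
<!-- Hypothetical manifest fragment: names are illustrative only -->
<activity
    android:name=".FakeLoginActivity"
    android:taskAffinity="com.example.bankingapp"
    android:allowTaskReparenting="true" />
```

When the real app is launched, the hijacking activity can surface on top of it, which is why the user believes they are typing credentials into the legitimate app.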



Traditionally a threat actor might take over an email account and send a message internally about making a wire transfer or deposit to some “new vendor.” As BEC became more popular over the last few years, criminals recognized they could add legitimacy to their phony calls-to-action by sending them from an actual vendor’s account, resulting in what’s being called Vendor Email Compromise. The first step is hijacking a corporate account; the second is re-routing funds from that organization’s customers into criminal-controlled accounts, under the guise of a transaction problem or account change. Enterprises can empower suppliers to prevent this fraud and associated damages. Sharing account exposure data directly with suppliers through your vendor risk management solution is the most efficient way to convey a sense of urgency for remediating the issues that put you both at risk, and seeing their actual risk data points their security team in the right direction. Alternatively, security teams can regularly check recovered breach data for email addresses connected to their suppliers, and share that information manually with them, though this could quickly become quite cumbersome.



Quote for the day:


"Making good decisions is a crucial skill at every level." -- Peter Drucker


Daily Tech Digest - December 02, 2019

Project Cortex: Microsoft aims to shake up knowledge management


Project Cortex isn't only for Office documents. Using Azure's Cognitive Services, it can use image and text recognition to work with scanned content, images, and other file formats such as PDF. It can even use rules to define form structures, so that key information can be extracted from scanned forms and other common document types, allowing you to build a model of where projects are spending money by parsing purchase orders and invoices. Extracted information is used as metadata to provide context around documents, helping users find the content they need. You're not limited to structured document types. Another Azure Cognitive Service, LUIS, forms the basis of Project Cortex's Machine Teaching. Here you can build new document models that look for key terms, allowing classification of, say, contracts which will differ from contract to contract, with different content and different formatting. Once a model is trained it can be used across your entire document store, improving search and increasing your organisation's underlying knowledge model.



Microsoft: We're creating a new Rust-based programming language for secure coding


The company recently revealed that its trials with Rust over C and C++ to remove insecure code from Windows had hit their targets. But why did Microsoft do this? The company has partially explained its security-related motives for experimenting with Rust, but hasn't gone into much detail about the reasons for its move. All Windows users know that on the second Tuesday of every month, Microsoft releases patches to address security flaws in Windows. Microsoft recently revealed that the vast majority of bugs being discovered these days are memory safety flaws, which is also why Microsoft is looking at Rust to improve the situation. Rust was designed to allow developers to code without having to worry about this class of bug. 'Memory safety' is the term for coding frameworks that help protect memory space from being abused by malware. Project Verona at Microsoft is meant to progress the company's work here to close off this attack vector. Microsoft's Project Verona could turn out to be just an experiment that leads nowhere, but the company has progressed far enough to have detailed some of its ideas through the UK-based non-profit Knowledge Transfer Network.


KPMG Launches Blockchain Platform, KPMG Origins 

The platform has been developed to enable global trade. It brings together a number of emerging technologies including blockchain, internet of things sensors (IoT), as well as data and analytics tools to provide transparency and traceability to trading partners across complex industries. KPMG Origins allows these trading partners to communicate unique product information across their supply chains, and in particular to end users, while reducing operational complexities. Laszlo Peter, KPMG Head of Blockchain Services for Asia Pacific, said: “KPMG Origins is the result of several successful initial trials with clients to understand industry pain and trust points, map incentive structures, and create a platform to add real value. To move beyond the hype, it is necessary to introduce complex technology across a diverse set of corporate stakeholders. The platform is based upon in-depth work across highly specialised areas, as well as collaboration across multiple jurisdictions to deliver a multi-lingual, standards and taxonomy driven platform that accelerates the development of distributed ecosystems.”


FinTech’s Opportunity in the Coming Recession


Historically, secured credit cards have been among the most prominent solutions for people who are new to credit or have poor credit history. But secured credit cards typically require an upfront deposit, as much as $500, which can be prohibitive for the very people who need such a tool to improve their credit. The solution to helping consumers build credit without an upfront security deposit is to offer more of an installment plan, using equity from a credit builder loan as a deposit for a secured card and on-time payment history in lieu of a hard inquiry. The tool itself is not new – credit builder loans have existed in credit unions for 40-50 years. But many people are unaware of this offering and do not have the tools to use it; FinTechs provide a delivery model that reaches and resonates with today’s tech and mobile-savvy consumers, particularly Millennials. Instead of taking time away from one of several jobs (44 percent of workers aged 25-34 report taking additional jobs to make ends meet) to go to a physical bank during business hours, borrowers are empowered to manage their finances directly from their phones at any time, day or night.


Scientists developed a new AI framework to prevent machines from misbehaving

The framework uses ‘Seldonian’ algorithms, named for the protagonist of Isaac Asimov’s “Foundation” series, a continuation of the fictional universe where the author’s “Laws of Robotics” first appeared. According to the team’s research, the Seldonian architecture allows developers to define their own operating conditions in order to prevent systems from crossing certain thresholds while training or optimizing. In essence, this should allow developers to keep AI systems from harming or discriminating against humans. Deep learning systems power everything from facial recognition to stock market predictions. In most cases, such as image recognition, it doesn’t really matter how the machines come to their conclusions as long as they’re correct. If an AI can identify cats with 90 percent accuracy, we’d probably consider that successful. But when it comes to matters of more importance, such as algorithms that predict recidivism or AI that automates medication dosing, there’s little to no margin for error.
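As a toy sketch of the underlying idea (this is not the authors' actual algorithm; the candidates, constraint, and threshold are illustrative assumptions): a Seldonian-style selector only returns a solution if a user-defined safety constraint holds, and otherwise refuses to return anything at all:

```python
def seldonian_select(candidates, score, constraint_violation, threshold=0.1):
    """Return the best-scoring candidate whose estimated constraint
    violation stays below the threshold, or None (no safe solution)."""
    safe = [c for c in candidates if constraint_violation(c) < threshold]
    if not safe:
        return None  # refuse to deploy rather than risk unsafe behavior
    return max(safe, key=score)

# Example: pick a model by accuracy, subject to a fairness-gap constraint
models = [
    {"name": "A", "accuracy": 0.92, "fairness_gap": 0.25},  # accurate but unfair
    {"name": "B", "accuracy": 0.88, "fairness_gap": 0.04},  # slightly less accurate, safe
]
best = seldonian_select(models,
                        score=lambda m: m["accuracy"],
                        constraint_violation=lambda m: m["fairness_gap"])
print(best["name"])  # B
```

The key design choice is that the constraint is checked by the framework, not left to the developer's optimization objective, so the highest-scoring but unsafe candidate is never returned.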


The Evolution of Lean Thinking - Transitioning from Lean Thinking to FLOW Thinking


The Flow System™ is not a new Agile or Lean framework. Indeed, it is not a framework at all, and it’s certainly not a one-size-fits-all solution. What is presented is a system of understanding, a system of learning. Many project management methods and agile frameworks concentrate on taskwork and planning with no regard to how an organization is structured to support these activities, seeing them simply as a linear progression of tasks. Scaling frameworks tend to struggle or simply not work as they do not recognize that they are operating in a complex adaptive system which can only scale through continuous decomposition and recombination, which they are unable to do with their rigid doctrines. Organizations and institutions utilize teams but fall short of developing teamwork skills and fail to restructure leadership to maximize the benefits that can be obtained from the utilization of teams. These shortcomings introduce additional constraints and barriers that prevent organizations and institutions from achieving a state of flow.


What’s Holding Back Data-Driven Healthcare?

Healthcare is definitely a data-rich sector, so scarcity of information is not a problem – and the NHS database is particularly valuable with respect to other countries, since it has comprehensive records that go back decades. However, access to health data is often very difficult from a regulatory point of view, and there are extreme differences in terms of quality and accessibility. Typically, health data is messy, dispersed and often siloed in a multitude of medical imaging archival systems, pathology systems, EHRs, electronic prescribing tools and insurance databases. While things are moving in the right direction, i.e., with the development of unified data formats such as Fast Healthcare Interoperability Resources, there is no easy and quick fix. No fancy algorithm can be developed without proper data collection and cleaning – and in many cases, this phase can take months. As long as companies keep reinventing the wheel and developing their own internal tools for data cleaning, with huge costs in terms of time and money, progress will be slow.


3 Modern Myths of Threat Intelligence

Many organizations don't know how to gain value from threat intelligence, and intelligence — cyber or not — doesn't help people who aren't willing to help themselves. If someone tells you that thieves are planning to rob your house tonight, what steps would you take to try to prevent it? You could lock the doors, hide your valuables, and maybe stay at a friend's house. However, none of that would guarantee that the crime wouldn't happen. I've noticed that organizations don't truly understand what it means to be "agile" when acting on threat intelligence. In my experience, an agile security team rapidly operationalizes and incorporates intelligence into detection processes, and deploys tools that work quickly to deliver detection. If you learn that a group is planning to hack your systems using a certain method, but you can't adjust your infrastructure or existing controls to defend against that method, intelligence is wasted. You are only as secure as the next steps you take after learning about a threat — and if you take them in the time you have before it hits.


IoT growth set to come from managed data analytics


According to CompTIA’s end-user data, there is a very slow technology adoption curve across various new trends, with only IoT and AI reaching critical mass. “Even amid all the hype, companies in the business of technology are starting to pull back on adopting new technology as part of their portfolio,” CompTIA noted in its IT industry outlook 2020 report. “This slight tap on the brakes suggests that classic situation where companies move too quickly into a new technology discipline or business model, only to have a reality check in year two or three.” CompTIA’s research also found that small and medium-sized businesses are struggling to integrate the various platforms, applications and data they need. While large businesses are able to use internal resources for integration, CompTIA noted that companies of all sizes may outsource to third parties for integration activities.


Blockchain must overcome hurdles before becoming a mainstream technology


We like blockchain. At least, that's the takeaway from a recent TechRepublic Premium survey where the majority of respondents (87%) stated that blockchain will have a 'positive' effect on their industry, and 27% indicated a 'very positive' effect. However, thinking something and actually doing it are two different things. Despite the enthusiasm for the technology, only 10% of those respondents actively use blockchain at their company. Blockchain appears on 13% of the strategic roadmaps for respondents' organizations, compared to 7% in 2018. Which industries will blockchain most likely impact? IT and technology was chosen by 58% of respondents, with professional services -- including finance, insurance, legal, and consulting -- a close second at 56%. Rounding out the top five cited industries were logistics & transport (45%), healthcare (41%), and retail & wholesale (37%). What needs to happen for the widespread adoption of blockchain? Two-thirds of respondents (66%) indicated the need for a clearly-stated business use case. A cryptocurrency operated by a government entity was suggested by 35% of respondents, while a company-controlled cryptocurrency was favored by 20%.



Quote for the day:


"Superlative leaders are fully equipped to deliver in destiny; they locate eternally assigned destines." -- Anyaele Sam Chiyson


Daily Tech Digest - December 01, 2019

Data Scientists: Machine Learning Skills are Key to Future Jobs


SlashData queried some 20,500 respondents from 167 countries, which means this is a pretty comprehensive survey from a global perspective. Responses were additionally weighted in order to “derive a representative distribution for platforms, segments, and types of IoT [projects],” according to the report accompanying the data. According to the survey, some 45 percent of developers want to either learn or improve their existing data science/machine learning skills. This outpaces the desire to learn UI design (33 percent of respondents), cloud native development such as containers (25 percent), project management (24 percent), and DevOps (23 percent). “The analysis of very large datasets is now made possible and, more importantly, affordable to most due to the emergence of cloud computing, open-source data science frameworks and Machine Learning as a Service (MLaaS) platforms,” the report added. “As a result, the interest of the developer community in the field is growing steadily.”



Did You Forget the Ops in DevOps?


This person with deep operational knowledge was "too busy" fighting fires in production environments, and had not been included in the devops transformation conversations for this large organization. He worked for a different legal entity in a different building, despite being part of the same group, and he was about to leave due to lack of motivation. Yet the organization was claiming to do "devops". The action we took in this case was to take offline a number of experts who were effectively bottlenecks to the flow of work (if you’ve read the book "The Phoenix Project" you will recognize the "Brent" character here). We asked them to build the new components they needed with infrastructure-as-code under a Scrum approach. We even took them to a different city so they wouldn't get disturbed by their regular coworkers. After a couple of months, they rejoined their previous teams but now had a totally new approach of working. Even the oldest Unix sysadmin had now become an agile evangelist that preached infrastructure as code rather than manually hot fixing production.


Is your approach to enterprise architecture relevant in today’s world?

In today’s fast-changing market, the role of enterprise architecture is more important than ever to prevent organisations from creating barriers to future change or expensive technical debt. To remain relevant, modern enterprise architecture approaches must be customer experience (CX)-driven, agile, and deliver the right level of detail just in time for when it needs to be consumed. Static business capabilities are no longer the only anchor point for architecting enterprise technology environments. CX is now a dominant driver of strategy and so businesses need to understand how stakeholders (customers, employees, partners, etc.) consume services and how they can be enabled by technology and platforms. The importance of capturing, managing, analysing and exposing data grows each year. Therefore, enterprise architecture needs to reinvent itself again to incorporate the needs of a rapidly evolving digital world. In a CX-driven planning approach, customer journeys are used to define the services and channels of engagement.


Edge Computing – Key Drivers and Benefits for Smart Manufacturing

Edge computing means faster response times, increased reliability and security. A lot has been said about how the Internet of Things (IoT) is revolutionizing the manufacturing world. Many studies have already predicted more than 50 billion devices will be connected by 2020. It is also expected over 1.44 billion data points will be collected per plant per day. This data will be aggregated, sanitized, processed, and used for critical business decisions. This means unprecedented demand and expectations on connectivity, computational power, and speed of quality of service. Can we afford any latency in critical operations such as an operator's hand trapped in a rotor, a fire situation, or a gas leak? This is the biggest driver for edge computing: more power closer to the data source, the "Thing" in IoT. Rather than a conventional central controlling system, this distributed control architecture, where control functions are placed closer to the devices, is gaining popularity as a lightweight alternative to the data center.


63% Of Executives Say AI Leads To Increased Revenues And 44% Report Reduced Costs

The McKinsey global survey found a nearly 25% year-over-year increase in the use of AI in standard business processes, with a sizable jump from the past year in companies using AI across multiple areas of their business; 58% of executives surveyed report that their organizations have embedded at least one AI capability into a process or product in at least one function or business unit, up from 47% in 2018; retail has seen the largest increase in AI use, with 60% of respondents saying their companies have embedded at least one AI capability in one or more functions or business units, a 35-percentage point increase from 2018; 74% of respondents whose companies have adopted or plan to adopt AI say their organizations will increase their AI investment in the next three years; 41% say their organizations comprehensively identify and prioritize their AI risks, citing most often cybersecurity and regulatory compliance. 84% of C-suite executives believe they must leverage AI to achieve their growth objectives, yet 76% report they struggle with how to scale AI.


How Europe’s AI ecosystem could catch up with China and the U.S.

Europe edges out the U.S. in total number of software developers (5.7 million to 4.4 million), and venture capital spending in Europe continues to rise to historically high levels. Even so, the U.S. and China beat Europe in venture capital spending, startup growth, and R&D spending. The U.S. also outpaces Europe in AI, big data, and quantum computing patents. A Center for Data Innovation study released last month also concluded that the U.S. is in the lead, followed by China, with Europe lagging behind. Multiple surveys of business executives have found that businesses around the world are struggling to scale the use of AI, but European firms trail major U.S. companies in this metric too, with the exception of smart robotics companies. This trend could be in part due to lower levels of data digitization, Bughin said. About 3-4% of businesses surveyed by McKinsey were found to be using AI at scale. The majority of those are digital native companies, he said, but 38% of major companies in the U.S. are digital natives compared to 24% in Europe.


Singapore government must realise human error also a security breach

More importantly, before dismissing man-made mistakes as "not a security risk," organisations such as the SAC need to consider the stats. "Inadvertent" breaches brought about by human error and system glitches accounted for 49% of data breaches, according to an IBM Security report conducted by Ponemon Institute, which estimated that human errors alone cost companies $3.5 million. In fact, cybersecurity vendor Kaspersky described employees as a major hole in an organisation's fight against cyber attacks. Some 52% of businesses viewed their staff as the biggest weakness in IT security, with employees' careless actions putting the company's security strategy at risk. It added that 47% of businesses were most concerned about employees sharing inappropriate data via mobile devices, and that careless or uninformed staff were the second-most likely cause of a serious security breach, behind only malware. Some 46% of cybersecurity incidents in the past year were attributed to careless or uninformed staff. Kaspersky further described human error on the part of staff as the "attack vector" to which businesses were falling victim.


6 essential practices to successfully implement machine learning solutions


Here’s a golden rule to remember: a machine learning algorithm is only as good as the data it’s fed. So, to use machine learning effectively, you must have the right data for the problem you’re trying to solve. And not just a few data points: machines need a lot of data to learn, think hundreds of thousands of data points. Your data will need to be formatted, cleaned, and organized for your algorithm, and you will need two datasets: one to train the model and one to evaluate its performance. So after identifying candidate use cases, select the ones for which data is available and which can quickly generate value across the board. Go for multiple smaller wins and have a clear data strategy. ... With a worldwide shortage of trained data scientists, you need to empower your data analytics professionals and other domain experts with the tools and support they need to become citizen data scientists.
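The two-dataset requirement mentioned above can be sketched with a simple shuffle-and-split helper: hold out a fraction of the data for evaluation and train on the rest. This is a minimal stand-in for what libraries such as scikit-learn provide; the function name and 20% split fraction are choices made for the example.

```python
import random

def train_eval_split(rows, eval_fraction=0.2, seed=42):
    """Shuffle the rows and split them into a training set
    and a held-out evaluation set the model never sees during training."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # fixed seed keeps the split reproducible
    cut = int(len(rows) * (1 - eval_fraction))
    return rows[:cut], rows[cut:]

data = list(range(1000))  # stand-in for cleaned, formatted data points
train, evaluation = train_eval_split(data)
```

Keeping the evaluation set separate is what makes the measured performance an honest estimate rather than a memorisation score.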


The hardest part of AI & analytics is not AI - it’s data management

“This is going to enable organisations to train their AI and ML algorithms with a more complete, more comprehensive and less biased sets of data.” According to Hanson, this can be done by using good data engineering tools with AI built in. “What we actually need is not just artificial intelligence in the analytics layer — in terms of generating graphical views of data and making decisions in real-time around data — we need to make sure that we’ve got artificial intelligence in the backend to ensure we’ve got well-curated data going into our analytics engines.” He warned that if organisations fail to do this, they won’t see the benefit of analytical AI going forward. “In my opinion, a lot of mistakes could be made, some serious mistakes, if we don’t make sure that we train our analytical AI with high quality, well-curated data,” said Hanson. He added that if the data sets aren’t good, AI advocates in organisations will not get the results they expect, which could hinder any future investment in the technology.
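What "well-curated data going into our analytics engines" means in practice can be as simple as rejecting records that would mislead a downstream model. The sketch below drops incomplete records and exact duplicates before anything reaches the analytics layer; the field names and sample records are invented for illustration, and real curation pipelines do far more (deduplication across sources, schema validation, bias checks).

```python
def curate(records, required_fields=("sensor_id", "value")):
    """Drop records that would mislead a downstream model:
    missing or null fields, and exact duplicates."""
    seen = set()
    clean = []
    for rec in records:
        if any(rec.get(f) is None for f in required_fields):
            continue  # incomplete record: skip rather than impute silently
        key = tuple(sorted(rec.items()))
        if key in seen:
            continue  # exact duplicate: would overweight this observation
        seen.add(key)
        clean.append(rec)
    return clean

raw = [
    {"sensor_id": "a1", "value": 3.2},
    {"sensor_id": "a1", "value": 3.2},   # duplicate
    {"sensor_id": "a2", "value": None},  # null value
]
clean = curate(raw)
```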


How to Advance Your Enterprise Risk Management Maturity

Before you can determine whether you want to advance your ERM maturity, you must first define your appetite for risk so you can make a proper assessment. Not all companies require the same level of risk maturity; in fact, the highest level of maturity does not necessarily equal the best ERM program. Rather than immediately aiming for the highest level of maturity, companies need to take a step back and identify their priorities to understand what is best for their organization’s specific circumstances. ... An effective risk culture is one that empowers business functions to be intellectually honest about the risks they face and encourages them to align risks with strategic objectives. To accomplish this, companies must remain patient. Changing the culture of an organization of any size takes time and is not something that can be done with a single meeting or memo to the staff. It takes time to educate team members properly and for leaders to demonstrate the importance of the change. ... Once you determine who should hold primary responsibility for the risk management program and have received the necessary buy-in, you will need to measure your progress toward greater ERM maturity. One way to measure progress is to compare yourself to your peers.



Quote for the day:


"The science of today is the technology of tomorrow." -- Edward Teller