Daily Tech Digest - December 03, 2019

Insider risk management – who’s the boss?

The CRO may be the best person to lead the insider threat program (ITP). This largely depends, however, on the scope and role of the CRO itself. Some CROs focus only on the strategic risk of the company. They set organizational risk tolerances and may develop methodologies for capturing and measuring risk postures. In this model, the operational risk is still wholly “owned” by the operational leaders (CSO, CISO, business units, etc.). CROs that fall into this category are not well positioned to lead an ITP because they lack the visibility and operational granularity required for an ITP. Other CROs, however, focus on both strategic and operational risk of the company. They not only set organizational risk tolerances, but also are involved in measuring, managing, and improving the operational risk posture of the organization. CROs in this group are well positioned to lead the ITP. They will often have the necessary high-level authority (report to CEO, Audit Committee, etc.) and by virtue of their scope, will also have the necessary relationships across all functions of the organization (business units, legal, HR, CSO, CISO, etc.).



Redgate’s journey to DevOps

While Redgate had a culture that was favorable towards DevOps, introducing it was a different story. The software development teams were eager to move to the shorter development cycles and continuous iteration of development and testing that DevOps promotes, but new Agile processes and practices had to be adopted to make it happen. The question was, which processes and practices? Scrum? Kanban boards? A3s? Standups? Burndown charts? The Deming Cycle? Monthly releases? Weekly releases? Pair programming? Mob programming? Extreme programming? Trunk-based development? Continuous delivery or continuous deployments? As you can see, there are many aspects to Agile, so the first job was to understand them and see which could – and should – be implemented at Redgate. In 2008, the first project to use Scrum began at Redgate. This Agile technique breaks work down into goals that can be completed within a fixed time period, typically two weeks to a month. At the end of each of these sprints, the ideal is to have software ready to release.


Why you need to pay more attention to combatting AI bias


While managing AI-driven functions within an enterprise can be valuable, it can also present challenges, the DataRobot report said. "Not all AI is treated equal, and without the proper knowledge or resources, companies could select or deploy AI in ways that could be more detrimental than beneficial." The survey found that more than a third (38%) of AI professionals still use black-box AI systems--meaning they have little to no visibility into how the data inputs into their AI solutions are being used. This lack of visibility could contribute to respondents' concerns about AI bias occurring within their organization, DataRobot said. AI bias is occurring because "we are making decisions on incomplete data in familiar retrieval systems," said Sue Feldman, president of the cognitive computing and content analytics consultancy Synthexis. "Algorithms all make assumptions about the world and the priorities of the user. That means that unless you understand these assumptions, you will still be flying blind." This is why it is important to use systems that include humans in the loop, instead of making decisions in a vacuum, added Feldman, who is also co-founder and managing director of the Cognitive Computing Consortium. They are "an improvement over completely automatic systems," she said.



How to Integrate Infosec and DevOps Using Chaos Engineering

D.I.E. is an acronym where D is for distributed, meaning that service outages, like a denial of service, are less impactful. I is for immutable, meaning that changes are easier to detect and to revert. And E is for ephemeral, meaning assets are short-lived, driving their value from an attacker's perspective as close to zero as possible. These are the system properties that chaos security principles help build into systems that are secure by design. Start with the expectation that security controls will fail, and prepare accordingly. Then, embrace the ability to respond to security incidents instead of only trying to avoid them. Shortridge recommended using game days to practice potentially risky scenarios in a safe environment, and using production-like environments to gain a better understanding of how things will behave in a complex system. She also recommends starting with simple testing before moving on to more sophisticated testing; for instance, build tests that teams can run effectively with accessible scenarios, such as phishing or SQL injection.
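
As a minimal sketch of what such a "simple test" could look like in practice, the following game-day check probes a production-like endpoint with a SQL injection payload and verifies that the control both blocks and detects it. The URLs, the alerts API, and the alert format are hypothetical stand-ins, not anything from the article.

```python
# Game-day sketch: verify a security control fails loudly, not silently.
# STAGING_URL and ALERTS_URL are hypothetical placeholders.
import requests

STAGING_URL = "https://staging.example.com/search"
ALERTS_URL = "https://staging.example.com/api/alerts"

def sql_injection_game_day():
    # Send a classic SQL injection probe to the production-like environment.
    resp = requests.get(STAGING_URL, params={"q": "' OR 1=1 --"}, timeout=10)
    assert resp.status_code == 403, "expected the control to block the probe"

    # The chaos-engineering half: confirm the attempt was *observed*.
    # Detection, not just prevention, is the property under test.
    alerts = requests.get(ALERTS_URL, timeout=10).json()
    assert any(a.get("rule") == "sqli" for a in alerts), "no alert was raised"

if __name__ == "__main__":
    sql_injection_game_day()
    print("game-day check passed: probe blocked and detected")
```

Run regularly, a check like this turns "we assume the control works" into an expectation that is continuously tested.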


RT? – Making Sense of High Availability

Monitoring is the cornerstone of your RTO target. If you don’t know there is a problem, you can’t fix it. Many blogs and articles focus on the next three parts, but let’s be honest: if you don’t know there’s a problem, you can’t respond. If your logs operate on a 5-minute delay, then you need to factor those 5 minutes into your RTO. From there, the next piece is response time, in the true sense of how quickly you can triage the problem and trigger a failover to your DR state. The best RTO targets leverage as much automation as possible here. Next, by looking at data replication, we can ensure that we are able to bring any data stores back up quickly and maintain business continuity. This matters because every restore of a data store takes time and pushes out our RTO; being able to fail over in 2 minutes doesn’t do you much good if it takes 20 minutes to get the database up. Finally, failover itself: if you are in a state where you need to fail over, how long does that take, and what automation and steps can you take to shorten that time significantly?
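
As a back-of-the-envelope illustration of how these parts add up, here is a small sketch using the figures above (the 5-minute log delay, 2-minute failover, and 20-minute restore come from the text; the triage figure is an assumed example):

```python
# Sum an RTO budget from its parts. The "assumed" figure is illustrative;
# the others come from the examples in the text.
rto_minutes = {
    "detection (log delay)": 5,   # monitoring lag counts against RTO
    "triage and response": 3,     # assumed; automation shrinks this
    "failover": 2,
    "database restore": 20,       # dominates the budget here
}

total = sum(rto_minutes.values())
for part, minutes in rto_minutes.items():
    print(f"{part:>24}: {minutes:3d} min")
print(f"{'total RTO':>24}: {total:3d} min")
```

The point of the arithmetic: a 2-minute failover is swamped by a 20-minute restore, which is why the data replication strategy deserves as much attention as failover automation.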


Working with Identity Server 4

Identity Server 4 is the tool of choice for getting bearer JSON web tokens (JWTs) in .NET. The tool comes in a NuGet package that can fit in any ASP.NET project. Identity Server 4 is an implementation of the OAuth 2.0 spec and supports standard flows. The library is extensible to support parts of the spec that are still in draft. Bearer JWTs are the preferred way to authenticate requests against a backend API. The JWT is stateless and aids in decoupling software modules. The JWT itself is not tied to the user session and works well in a distributed system. This reduces friction between modules since they do not share dependencies like a user session. In this take, I’ll delve deep into Identity Server 4. This OAuth implementation is fully compatible with the spec. I’ll start from scratch with an ASP.NET Web API project using .NET Core. I’ll stick to the recommended version of .NET Core, which is 3.0.100 at the time of this writing. You can find a working sample of the code here. To begin, I’ll use CLI tools to keep the focus on the code without visual aids from Visual Studio.


The IT4IT standard was conceived of more than eight years ago by a small group of European companies that saw the need for normative guidance to direct functionality and interoperability for large, multi-vendor IT management software portfolios. Each had tried to create a tool orchestration and interoperability architecture themselves, at great cost. Lesson learned: Their solutions were very similar and, in fact, just the kind of thing that should be a general solution or standard, not proprietary or unique to one company. Supported by HP Software, they worked together as a consortium to merge their individual efforts into a common model that could stand as a universally available normative standard for the industry. This effort resulted in IT4IT version 1.0. At that point the IP was donated to The Open Group, an organization known for its management of several industry standards such as UNIX, TOGAF and others. The private consortium became the IT4IT Forum and their architecture evolved into the publicly available IT4IT Reference Architecture standard.


Menlo Security CEO on what small companies should know about cybersecurity

We've seen two things happen. One, probably over the last 10 years — security budgets have probably tripled, if not more. So security has become much more front of mind for the CIO and boards as we keep reading about these high-profile breaches that end up causing a lot of damage and reputation loss for the companies that were breached. And in that same timeframe that budgets have gone 3X, I would say that the number of infections has probably risen by a factor of three as well, if not more. And that's counterintuitive, because normally the more you invest in a certain solution set, the better results you get. So the fact that it's not working is, I'd say, kind of the big challenge — and people miss that. They keep investing in the same concepts, the same solutions, the same vendors. ... There wasn't a great understanding of just how bad the threat could be. But I think we've seen enough cyber incidents in the headlines, including some high-profile events like those that affected our U.S. elections and various things like that.


New Android bug targets banking apps on Google Play store

As Promon describes it, StrandHogg allows a malicious app masquerading as a legitimate one to ask for certain permissions, including access to SMS messages, photos, GPS, and the microphone. Unsuspecting users approve the requests, thinking they're granting permission to a legitimate app and not one that's fraudulent and malicious. When the user enters the login credentials within the app, that information is immediately sent to the attacker, who can then sign in and control sensitive apps. The vulnerability itself lies in the multitasking system of Android, Promon's marketing and communication director, Lars Lunde Birkeland, said. The exploit is based on an Android control setting called "taskAffinity," which allows any app, including malicious ones, to freely assume any identity in the multitasking system, Birkeland said. A specific malware sample analyzed by Promon was not on Google Play but was instead installed through dropper apps and hostile downloaders available on Google's mobile app store, according to Promon. Such apps either have or pretend to have the features of games, utilities, and other popular apps but actually install additional apps that can deploy malware or steal user data.
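
To make the taskAffinity mechanism concrete, the fragment below is a hedged sketch of the kind of manifest entry a malicious app could declare, based on Promon's public description of the exploit; the activity and package names are invented for illustration.

```xml
<!-- Hypothetical manifest fragment for a malicious app. Setting
     taskAffinity to the victim's package name lets this activity claim
     membership of the victim app's task; allowTaskReparenting moves it
     into that task when the victim app launches, so the spoofed screen
     is drawn on top of the real UI. -->
<activity
    android:name=".SpoofedLoginActivity"
    android:taskAffinity="com.victim.bankapp"
    android:allowTaskReparenting="true"
    android:excludeFromRecents="true" />
```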



Traditionally a threat actor might take over an email account and send a message internally about making a wire transfer or deposit to some “new vendor.” As BEC became more popular over the last few years, criminals recognized they could add legitimacy to their phony calls-to-action by sending them from an actual vendor’s account, resulting in what’s being called Vendor Email Compromise. The first step is hijacking a corporate account; the second is re-routing funds from that organization’s customers into criminal-controlled accounts, under the guise of a transaction problem or account change. Enterprises can empower suppliers to prevent this fraud and associated damages. Sharing account exposure data directly with suppliers through your vendor risk management solution is the most efficient way to convey a sense of urgency for remediating the issues that put you both at risk, and seeing their actual risk data points their security team in the right direction. Alternatively, security teams can regularly check recovered breach data for email addresses connected to their suppliers, and share that information manually with them, though this could quickly become quite cumbersome.



Quote for the day:


"Making good decisions is a crucial skill at every level." -- Peter Drucker


Daily Tech Digest - December 02, 2019

Project Cortex: Microsoft aims to shake up knowledge management


Project Cortex isn't only for Office documents. Using Azure's Cognitive Services, it can use image and text recognition to work with scanned content, images, and other file formats such as PDF. It can even use rules to define form structures, so that key information can be extracted from scanned forms and other common document types, allowing you to build a model of where projects are spending money by parsing purchase orders and invoices. Extracted information is used as metadata to provide context around documents, helping users find the content they need. You're not limited to structured document types. Another Azure Cognitive Service, LUIS, forms the basis of Project Cortex's Machine Teaching. Here you can build new document models that look for key terms, allowing classification of, say, contracts which will differ from contract to contract, with different content and different formatting. Once a model is trained it can be used across your entire document store, improving search and increasing your organisation's underlying knowledge model.



Microsoft: We're creating a new Rust-based programming language for secure coding


The company recently revealed that its trials with Rust over C and C++ to remove insecure code from Windows had hit their targets. But why did Microsoft do this? The company has partially explained its security-related motives for experimenting with Rust, but hasn't gone into much detail about the reasons for its move. All Windows users know that on the second Tuesday of every month, Microsoft releases patches to address security flaws in Windows. Microsoft recently revealed that the vast majority of bugs being discovered these days are memory safety flaws, which is also why Microsoft is looking at Rust to improve the situation. Rust was designed to allow developers to code without having to worry about this class of bug. 'Memory safety' is the term for coding frameworks that help protect memory space from being abused by malware. Project Verona at Microsoft is meant to progress the company's work here to close off this attack vector. Microsoft's Project Verona could turn out to be just an experiment that leads nowhere, but the company has progressed far enough to have detailed some of its ideas through the UK-based non-profit Knowledge Transfer Network.


KPMG Launches Blockchain Platform, KPMG Origins 

The platform has been developed to enable global trade. It brings together a number of emerging technologies including blockchain, internet of things sensors (IoT), as well as data and analytics tools to provide transparency and traceability to trading partners across complex industries. KPMG Origins allows these trading partners to communicate unique product information across their supply chains, and in particular to end users, while reducing operational complexities. Laszlo Peter, KPMG Head of Blockchain Services for Asia Pacific, said: “KPMG Origins is the result of several successful initial trials with clients to understand industry pain and trust points, map incentive structures, and create a platform to add real value. To move beyond the hype, it is necessary to introduce complex technology across a diverse set of corporate stakeholders. The platform is based upon in-depth work across highly specialised areas, as well as collaboration across multiple jurisdictions to deliver a multi-lingual, standards and taxonomy driven platform that accelerates the development of distributed ecosystems.”


FinTech’s Opportunity in the Coming Recession


Historically, secured credit cards have been among the most prominent solutions for people who are new to credit or have poor credit history. But secured credit cards typically require an upfront deposit, as much as $500, which can be prohibitive for the very people who need such a tool to improve their credit. The solution to helping consumers build credit without an upfront security deposit is to offer more of an installment plan, using equity from a credit builder loan as a deposit for a secured card and on-time payment history in lieu of a hard inquiry. The tool itself is not new – credit builder loans have existed in credit unions for 40-50 years. But many people are unaware of this offering and do not have the tools to use it; FinTechs provide a delivery model that reaches and resonates with today’s tech and mobile-savvy consumers, particularly Millennials. Instead of taking time away from one of several jobs (44 percent of workers aged 25-34 report taking additional jobs to make ends meet) to go to a physical bank during business hours, borrowers are empowered to manage their finances directly from their phones at any time, day or night.


Scientists developed a new AI framework to prevent machines from misbehaving

The framework uses ‘Seldonian’ algorithms, named for the protagonist of Isaac Asimov’s “Foundation” series, a continuation of the fictional universe where the author’s “Laws of Robotics” first appeared. According to the team’s research, the Seldonian architecture allows developers to define their own operating conditions in order to prevent systems from crossing certain thresholds while training or optimizing. In essence, this should allow developers to keep AI systems from harming or discriminating against humans. Deep learning systems power everything from facial recognition to stock market predictions. In most cases, such as image recognition, it doesn’t really matter how the machines come to their conclusions as long as they’re correct. If an AI can identify cats with 90 percent accuracy, we’d probably consider that successful. But when it comes to matters of more importance, such as algorithms that predict recidivism or AI that automates medication dosing, there’s little to no margin for error.
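
A toy sketch of that interface might look like the following. It illustrates the "return a solution only if a user-defined safety condition holds" idea, not the published Seldonian algorithm, and every name, dataset, and threshold in it is invented for illustration.

```python
# Toy illustration of the Seldonian interface described above: return a
# candidate model only if a user-defined safety condition holds on
# held-out data. Names, data, and the constraint are all invented.
import random

def train_candidate(data):
    # Stand-in for any optimizer; the "model" here is just a threshold.
    return sum(x for x, _ in data) / len(data)

def safety_constraint(model, safety_data, max_disparity=0.1):
    # User-defined operating condition: the rate of positive predictions
    # must not differ between groups "a" and "b" by more than the bound.
    rates = {}
    for group in ("a", "b"):
        preds = [x > model for x, g in safety_data if g == group]
        rates[group] = sum(preds) / max(len(preds), 1)
    return abs(rates["a"] - rates["b"]) <= max_disparity

def seldonian_fit(train_data, safety_data):
    model = train_candidate(train_data)
    if safety_constraint(model, safety_data):
        return model
    return None  # refuse to return a model that violates the condition

random.seed(0)
data = [(random.random(), random.choice("ab")) for _ in range(1000)]
print(seldonian_fit(data[:500], data[500:]))
```

The notable design choice is the None return: when the constraint cannot be satisfied, the system declines to return a model rather than handing back an unsafe one.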


The Evolution of Lean Thinking - Transitioning from Lean Thinking to FLOW Thinking


The Flow System™ is not a new Agile or Lean framework. Indeed, it is not a framework at all, and it’s certainly not a one-size-fits-all solution. What is presented is a system of understanding, a system of learning. Many project management methods and agile frameworks concentrate on taskwork and planning with no regard to how an organization is structured to support these activities, seeing them simply as a linear progression of tasks. Scaling frameworks tend to struggle or simply not work as they do not recognize that they are operating in a complex adaptive system which can only scale through continuous decomposition and recombination, which they are unable to do with their rigid doctrines. Organizations and institutions utilize teams but fall short of developing teamwork skills and fail to restructure leadership to maximize the benefits that can be obtained from the utilization of teams. These shortcomings introduce additional constraints and barriers that prevent organizations and institutions from achieving a state of flow.


What’s Holding Back Data-Driven Healthcare?

Healthcare is definitely a data-rich sector, so scarcity of information is not a problem – and the NHS database is particularly valuable with respect to other countries, since it has comprehensive records that go back decades. However, access to health data is often very difficult from a regulatory point of view, and there are extreme differences in terms of quality and accessibility. Typically, health data is messy, dispersed and often siloed in a multitude of medical imaging archival systems, pathology systems, EHRs, electronic prescribing tools and insurance databases. While things are moving in the right direction, e.g. with the development of unified data formats such as Fast Healthcare Interoperability Resources, there is no easy and quick fix. No fancy algorithm can be developed without proper data collection and cleaning – and in many cases, this phase can take months. As long as companies keep reinventing the wheel and developing their own internal tools for data cleaning, with huge costs in terms of time and money, progress will be slow.


3 Modern Myths of Threat Intelligence

Many organizations don't know how to gain value from threat intelligence, and intelligence — cyber or not — doesn't help people who aren't willing to help themselves. If someone tells you that thieves are planning to rob your house tonight, what steps would you take to try to prevent it? You could lock the doors, hide your valuables, and maybe stay at a friend's house. However, none of that would guarantee that the crime wouldn't happen. I've noticed that organizations don't truly understand what it means to be "agile" when acting on threat intelligence. In my experience, an agile security team rapidly operationalizes and incorporates intelligence into detection processes, and deploys tools that work quickly to deliver detection. If you learn that a group is planning to hack your systems using a certain method, but you can't adjust your infrastructure or existing controls to defend against that method, intelligence is wasted. You are only as secure as the next steps you take after learning about a threat — and if you take them in the time you have before it hits.


IoT growth set to come from managed data analytics


According to CompTIA’s end-user data, there is a very slow technology adoption curve across various new trends, with only IoT and AI reaching critical mass. “Even amid all the hype, companies in the business of technology are starting to pull back on adopting new technology as part of their portfolio,” CompTIA noted in its IT industry outlook 2020 report. “This slight tap on the brakes suggests that classic situation where companies move too quickly into a new technology discipline or business model, only to have a reality check in year two or three.” CompTIA’s research also found that small and medium-sized businesses are struggling to integrate the various platforms, applications and data they need. While large businesses are able to use internal resources for integration, CompTIA noted that companies of all sizes may outsource to third parties for integration activities.


Blockchain must overcome hurdles before becoming a mainstream technology


We like blockchain. At least, that's the takeaway from a recent TechRepublic Premium survey where the majority of respondents (87%) stated that blockchain will have a 'positive' effect on their industry, and 27% indicated a 'very positive' effect. However, thinking something and actually doing it are two different things. Despite the enthusiasm for the technology, only 10% of those respondents actively use blockchain at their company. Blockchain appears on 13% of the strategic roadmaps for respondents' organizations, compared to 7% in 2018. Which industries will blockchain most likely impact? IT and technology was chosen by 58% of respondents, with professional services -- including finance, insurance, legal, and consulting -- a close second at 56%. Rounding out the top five cited industries were logistics & transport (45%), healthcare (41%), and retail & wholesale (37%). What needs to happen for the widespread adoption of blockchain? Two-thirds of respondents (66%) indicated the need for a clearly-stated business use case. A cryptocurrency operated by a government entity was suggested by 35% of respondents, while a company-controlled cryptocurrency was favored by 20%.



Quote for the day:


"Superlative leaders are fully equipped to deliver in destiny; they locate eternally assigned destines." -- Anyaele Sam Chiyson


Daily Tech Digest - December 01, 2019

Data Scientists: Machine Learning Skills are Key to Future Jobs


SlashData queried some 20,500 respondents from 167 countries, which means this is a pretty comprehensive survey from a global perspective. Responses were additionally weighted in order to “derive a representative distribution for platforms, segments, and types of IoT [projects],” according to the report accompanying the data. According to the survey, some 45 percent of developers want to either learn or improve their existing data science/machine learning skills. This outpaces the desire to learn UI design (33 percent of respondents), cloud native development such as containers (25 percent), project management (24 percent), and DevOps (23 percent). “The analysis of very large datasets is now made possible and, more importantly, affordable to most due to the emergence of cloud computing, open-source data science frameworks and Machine Learning as a Service (MLaaS) platforms,” the report added. “As a result, the interest of the developer community in the field is growing steadily.”



Did You Forget the Ops in DevOps?


This person with deep operational knowledge was "too busy" fighting fires in production environments, and had not been included in the devops transformation conversations for this large organization. He worked for a different legal entity in a different building, despite being part of the same group, and he was about to leave due to lack of motivation. Yet the organization was claiming to do "devops". The action we took in this case was to take offline a number of experts who were effectively bottlenecks to the flow of work (if you’ve read the book "The Phoenix Project" you will recognize the "Brent" character here). We asked them to build the new components they needed with infrastructure-as-code under a Scrum approach. We even took them to a different city so they wouldn't get disturbed by their regular coworkers. After a couple of months, they rejoined their previous teams but now had a totally new approach of working. Even the oldest Unix sysadmin had now become an agile evangelist that preached infrastructure as code rather than manually hot fixing production.


Is your approach to enterprise architecture relevant in today’s world?

In today’s fast-changing market, the role of enterprise architecture is more important than ever to prevent organisations from creating barriers to future change or expensive technical debt. To remain relevant, modern enterprise architecture approaches must be customer experience (CX)-driven, agile, and deliver the right level of detail just in time for when it needs to be consumed. Static business capabilities are no longer the only anchor point for architecting enterprise technology environments. CX is now a dominant driver of strategy and so businesses need to understand how stakeholders (customers, employees, partners, etc.) consume services and how they can be enabled by technology and platforms. The importance of capturing, managing, analysing and exposing data grows each year. Therefore, enterprise architecture needs to reinvent itself again to incorporate the needs of a rapidly evolving digital world. In a CX-driven planning approach, customer journeys are used to define the services and channels of engagement.


Edge Computing – Key Drivers and Benefits for Smart Manufacturing

Edge computing means faster response times and increased reliability and security. A lot has been said about how the Internet of Things (IoT) is revolutionizing the manufacturing world. Many studies have predicted that more than 50 billion devices will be connected by 2020, and that over 1.44 billion data points will be collected per plant per day. This data will be aggregated, sanitized, processed, and used for critical business decisions, which means unprecedented demands on connectivity, computational power, and quality of service. Can we afford any latency in critical operations, such as an operator's hand trapped in a rotor, a fire, or a gas leak? This is the biggest driver for edge computing: more power closer to the data source, the “Thing” in IoT. Rather than a conventional central controlling system, this distributed control architecture is gaining popularity as a lightweight alternative to the data center, with control functions placed closer to the devices.


63% Of Executives Say AI Leads To Increased Revenues And 44% Report Reduced Costs

The McKinsey global survey found a nearly 25% year-over-year increase in the use of AI in standard business processes, with a sizable jump from the past year in companies using AI across multiple areas of their business; 58% of executives surveyed report that their organizations have embedded at least one AI capability into a process or product in at least one function or business unit, up from 47% in 2018; retail has seen the largest increase in AI use, with 60% of respondents saying their companies have embedded at least one AI capability in one or more functions or business units, a 35-percentage-point increase from 2018; 74% of respondents whose companies have adopted or plan to adopt AI say their organizations will increase their AI investment in the next three years; 41% say their organizations comprehensively identify and prioritize their AI risks, citing most often cybersecurity and regulatory compliance. 84% of C-suite executives believe they must leverage AI to achieve their growth objectives, yet 76% report they struggle with how to scale AI.


How Europe’s AI ecosystem could catch up with China and the U.S.

Europe edges out the U.S. in total number of software developers (5.7 million to 4.4 million), and venture capital spending in Europe continues to rise to historically high levels. Even so, the U.S. and China beat Europe in venture capital spending, startup growth, and R&D spending. The U.S. also outpaces Europe in AI, big data, and quantum computing patents. A Center for Data Innovation study released last month also concluded that the U.S. is in the lead, followed by China, with Europe lagging behind. Multiple surveys of business executives have found that businesses around the world are struggling to scale the use of AI, but European firms trail major U.S. companies in this metric too, with the exception of smart robotics companies. This trend could be in part due to lower levels of data digitization, Bughin said. About 3-4% of businesses surveyed by McKinsey were found to be using AI at scale. The majority of those are digital native companies, he said, but 38% of major companies in the U.S. are digital natives compared to 24% in Europe.


Singapore government must realise human error also a security breach

More importantly, before dismissing man-made mistakes as "not a security risk", organisations such as the SAC need to consider the stats. "Inadvertent" breaches brought about by human error and system glitches accounted for 49% of data breaches, according to an IBM Security report conducted by Ponemon Institute, which estimated that human errors alone cost companies $3.5 million. In fact, cybersecurity vendor Kaspersky described employees as a major hole in an organisation's fight against cyber attacks. Some 52% viewed their staff as the biggest weakness in IT security, where their careless actions put the company's security strategy at risk. It added that 47% of businesses were concerned most about employees sharing inappropriate data via mobile devices, while careless or uninformed staff were the second-most likely cause of a serious security breach--second only to malware. Some 46% of cybersecurity incidents in the past year were attributed to careless or uninformed staff. Kaspersky further described human error on the part of staff as the "attack vector" that businesses were falling victim to.


6 essential practices to successfully implement machine learning solutions


Here’s a golden rule to remember: a machine learning algorithm is only as good as the data it’s fed. So, to use machine learning effectively, you must have the right data for the problem you’re trying to solve. And not just a few data points. Machines need a lot of data to learn — think hundreds of thousands of data points. Your data will need to be formatted, cleaned, and organized for your algorithm, and you will need two datasets: one to train the model and one to evaluate its performance. So after identifying candidate use cases, filter for the ones where data is available and that can quickly generate value across the board. Go for multiple smaller wins and have a clear data strategy. ... With a worldwide shortage of trained data scientists, you need to empower your data analytics professionals and other domain information experts with the tools and support they need to become citizen data scientists.
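
As a minimal sketch of the two-dataset rule, using scikit-learn with synthetic data purely for illustration:

```python
# Two datasets: one to train the model, one held out to evaluate it.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for your formatted, cleaned, organized data.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)

# Hold out 20% of the data purely for measuring performance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```

Evaluating on data the model never saw during training is what makes the reported accuracy an honest estimate rather than a memorization score.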


The hardest part of AI & analytics is not AI - it’s data management

“This is going to enable organisations to train their AI and ML algorithms with more complete, more comprehensive and less biased sets of data.” According to Hanson, this can be done by using good data engineering tools with AI built-in. “What we actually need is not just artificial intelligence in the analytics layer — in terms of generating graphical views of data and making decisions in real-time around data — we need to make sure that we’ve got artificial intelligence in the backend to ensure we’ve got well-curated data going into our analytics engines.” He warned that if organisations fail to do this, they won’t see the benefit of analytical AI going forward. “In my opinion, a lot of mistakes could be made, some serious mistakes, if we don’t make sure that we train our analytical AI with high quality, well-curated data,” said Hanson. He added that if the data sets aren’t good, AI advocates in organisations are not going to get the results they expect. This could hinder any future investment in the technology.


How to Advance Your Enterprise Risk Management Maturity

Before you can determine whether you want to advance your ERM maturity, you must first define your appetite for risk to make a proper assessment. Not all companies require the same level of risk maturity. In fact, the highest level of maturity does not necessarily equal the best ERM program. Rather than immediately aiming for the highest level of maturity, companies need to take a step back and identify their priorities to understand what is best for their organization’s specific circumstances. ... Effective risk culture is one that empowers business functions to be intellectually honest about the risks they face and encourages them to align risks with strategic objectives. To accomplish this, companies must remain patient. Changing a culture of any sized organization takes time and is not something that can be done by any single meeting or memo to the staff. It takes time to educate team members properly and for leaders to demonstrate the importance of the change. ... Once you determine who should hold primary responsibility for the risk management program and have received the necessary buy-in, you will need to measure your progress towards greater ERM maturity. One way to measure progress is to compare yourself to your peers.



Quote for the day:


"The science of today is the technology of tomorrow." -- Edward Teller


Daily Tech Digest - November 30, 2019

We’ve got to regulate the application of AI — not the tech itself


Another important factor that governments and businesses will need to be aware of will be in devising methods to prevent the rise of AI used with malicious intent, i.e. for hacking or fraudulent sales. Most cyber-experts predict that cyberattacks powered by AI will be one of the biggest challenges of the 2020s, which means that regulations and preventative measures should be implemented as with any other industry: designed specifically for the application. Stringent qualification processes will also need to be addressed for certain industries. For example, Broadway show producers have been driving ticket sales through an automated chatbot, with the show Wicked boasting ROI increases of up to 700 percent. This has also allowed producers to sell tickets for 20 percent higher than the average weekly price.  Regulations will need to address the fact that AI and bots have the potential to take advantage of consumers’ wallets, which means that policymakers will need to work closely with firms that are gradually beginning to rely on chatbots to make sure that consumer rights are not being breached.



How Smart Home Tech Is Shaking Up The Insurance Industry

Through smart home devices, homeowners are able to remain connected to their property 24/7, whether at home, work or on holiday. In turn, this constant connectivity instils a psychological shift in householders, encouraging them to take a more proactive approach to home security and protection. ... For example, while water damage may not top the list of worries from homeowners, it can cost thousands of pounds to repair and is one of the most common types of domestic property damage claims. However, with a leak sensor installed, escaping water can be caught quickly and customers will even be alerted via a notification to their smartphone. This knowledge is critical, as homeowners are able to call out a plumber on the same day – at a fixed fee – and contain the damage. This proactivity benefits both sides. For insurers, responsible and safe homeowners pose less of a risk, resulting in lower premiums. It’s a win win all round. Moreover, the additional information gained from the steady stream of signals sent to the insurer from in-home sensors and monitors can allow claim handlers to remain better informed in the event of an incident.


Fintech Regulation Needs More Principles, Not More Rules


It is important to recognize that principles-based regulation is not a euphemism for “deregulation” or a “light-touch” approach—far from it. Principles-based regulation is a different way of achieving the same regulatory outcomes as rules-based regulation. But it simply does so in what is, in many cases, a more efficient and flexible manner. That flexibility also prevents subversion of those outcomes through the kind of loopholes that revealed the inherent vulnerability of rules-based regulation in the run up to the financial crisis. Of course, in practice, it is rare to have either a purely principles-based or a purely rules-based regulation. Rather, they represent two ends of the regulatory spectrum. Every principles-based regulatory regime has some rules, and every rules-based regime has some element of principle. For this reason, we frequently see hybrid regulatory systems of principles and rules.


Singapore wants widespread AI use in smart nation drive


"Domestically, our private and public sectors will use AI decisively to generate economic gains and improve lives. Internationally, Singapore will be recognised as a global hub in innovating, piloting, test-bedding, deploying and scaling AI solutions for impact," said the SNDGO, which is part of the Prime Minister's Office. To kick off its efforts, the government identified five national projects that focused on key industry challenges, including intelligent freight planning in transport and logistics, chronic disease prediction and management in healthcare, and border clearance operations in national safety and security. These form part of nine sectors that have been earmarked for heightened deployment as AI is expected to generate high social and economic value for Singapore. These verticals include manufacturing, finance, cybersecurity, and government. The national AI strategy also outlined five key enablers that the government deemed essential in building a "vibrant and sustainable" ecosystem for AI innovation and adoption. A robust data architecture, for instance, would be necessary for the public and private sectors to manage and exchange information securely, so AI algorithms can have access to quality datasets for training and testing.


How To Thrive At Work: 10 Strategies Based On Brain Science

In his book, The Shallows, Nicholas Carr demonstrates how our internet usage has rewired our brains. We think superficially, skimming, glancing and scanning rather than reading or processing more deeply. Cal Newport, in his book Deep Work, advocates for focusing, contemplating and concentrating. His contention is that this distraction-free thinking has become increasingly rare and is a skill we must learn (or relearn). In fact, empathy—so critical to our humanity—is impossible without deeply considering others’ situations. And the ability to solve problems and develop ideas cannot happen effectively without depth of thought. Tell stories. While communicating facts tends to engage limited portions of the brain, hearing a story engages multiple parts of the brain. One study in particular, using MRI, found participants had greater understanding and retention of concepts based on the engagement of multiple parts of the brain. Other researchers, including Dr. Paul Zak, have demonstrated that hearing stories that include conflicts and meaningful characters tends to engage us emotionally. The resulting release of oxytocin leads us to trust the messages and morals the story is trying to convey.


3 Reasons This Stock Is a Top Cybersecurity Pick

Check Point's research and development expenses increased 20% year over year while selling and marketing expenses rose nearly 10.5%. Both of these metrics outpaced the company's actual revenue growth. In fact, Check Point has stepped up its investment in both of these line items in the past year or so, and the positive impact is visible on the company's subscription growth. The company is now looking to get into lucrative cybersecurity niches as well. Check Point recently announced the acquisition of Internet of Things (IoT)-focused cybersecurity start-up Cymplify. Check Point will integrate Cymplify's expertise into its Infinity cybersecurity architecture so that clients can protect their IoT devices -- such as smart TVs, medical devices, and IP cameras -- against cyberattacks. This should open up a big growth opportunity for Check Point because according to IHS Markit, cybersecurity is the fastest-growing IoT niche. The firm predicts that the IoT data security market will grow from $3 billion in revenue this year to $7 billion in 2022 as more original equipment manufacturers (OEMs) move to secure their IoT devices.


5G radiation no worse than microwaves or baby monitors: Australian telcos

"When we've done our tests on our 5G network, they're typically 1,000 to 10,000 times less than what we get from other devices. So when you add all of that up together, it's all very low in terms of total emission. But you're finding that 5G is in fact a lot lower than many other devices we use in our everyday lives." Wood added there is no evidence for cancer or non-thermal effects from radio frequency EME. "There's some evidence for biological effects, but none of these are non-adverse," Wood told the committee. "So they've really looked at all of the research they need to set a safety standard, and in summary what they said is that, if you follow the guidelines, they're protective of all people, including children." On the issue of governmental revenue raising from its upcoming spectrum sale, Optus said it would be wrong of government to view it as a cash cow, as every dollar spent on spectrum is not used on creating networks. "Critically, in order to achieve the coverage and deployment required, 5G networks will require significant amounts of spectrum," the Singaporean-owned telco wrote.


How can businesses stop AI from going bad?

Starting from the very beginning of the process, CIOs can help AI be “good” by ensuring that the data being used to create the algorithms is itself ethical and unbiased. Gathering and using data from ethical sources significantly reduces the risk of harbouring toxic datasets which may infect systems with problematic biases further down the line. This is especially crucial for highly regulated industries, which will need to identify biases already present and remedy them accordingly. Using insurance as an example, CIOs should take care not to include data that heavily features one particular demographic, gender etc., which might skew averages and inform non-representative policies. Collecting a rich sample of ethical, GDPR-compliant, representative data from consenting customers actually benefits the accuracy of the AI it powers, and it also reduces the work needed to “clean” it.


INNOPHYS Develops Muscle Suit for Physical Labor

The suit can lift upwards of 30kg. While it won’t do the lifting on its own, it can take that weight off its wearer. It offers support in the form of hydraulically-controlled artificial muscles which are housed in an aluminum backpack linked to the waist joints. The pack provides two axes of movement: one for bending at the waist and another for supporting the thighs. Controlling the suit can be done in two ways. The wearer can either blow into a tube or touch a control surface with their chin, thus creating a hands-free control system for the exoskeleton. The muscle suit is wrapped inside a custom, water-repellent bag. This protects the device from the elements and gives it a softer appearance. ... Many other Japanese companies have also taken up the challenge of producing suits to assist in physical labor. Companies like HAL have already placed a stable foothold in the exoskeleton industry with their series of robotic suits. Nevertheless, the Muscle Suit is an awe-inspiring invention by this venture company from the Tokyo University of Science.



Yes—at least in some circumstances, both researchers said. Bordes’s group, for example, is creating a benchmark test that can be used to train a machine learning algorithm to automatically detect deepfakes. And Rossi said that, in some cases, A.I. could be used to highlight potential bias in models created by other artificial intelligence algorithms. While technology could produce useful tools for detecting—and even correcting—problems with A.I. software, both scientists emphasized people should not be lulled into complacency about the need for critical human judgment. “Addressing this issue is really a process,” Rossi told me. “When you deliver an A.I. system, you cannot just think about these issues at the time the product is ready to be deployed. Every design choice ... can bring unconscious bias.” You can read more about our discussion and watch a video here. ... “Yes, it is true that A.I. is only as good as the data it has been fed,” she said. But, she argued, this potentially gave people tremendous power.



Quote for the day:


"Whenever you see a successful business, someone once made a courageous decision." -- Peter F. Drucker


Daily Tech Digest - November 29, 2019

Cybersecurity: The web has a padlock problem - and your internet safety is at risk


Even now, encryption is sometimes discussed as if it's a bonus when using the internet, when it needs to become the standard way of doing things everywhere on the internet, Helme explained. "We need it to become so ingrained and embedded into everything that we do that it's boring and we don't need to talk about it because it shouldn't be special. Encryption should be the boring default that we don't need to talk about," he said. The security industry therefore needs to step up and help fix the issue, Helme argued, because by doing this, it takes the responsibility for deciding if a website is safe or not away from the user – something that will help make the internet safer for everyone. "We need to take encryption and make it the default, universal – it needs to be everywhere," he said, adding: "The lack of encryption on the web is actually a bug. And what we're doing now isn't adding a new feature for an improvement or a new thing: we're going back and fixing a mistake we made in the beginning."


Simplifying a data problem can ensure better buy-in

Like many complex technical topics, an ability to share a relatable and very human story can engender action far more quickly than the most thoughtful technical arguments, or detailed integration diagrams combined. Similarly, an ability to find an impactful story can serve as a sanity check for your data-related projects. If you can't concisely articulate how gathering, sharing, or analyzing data can have a real impact on your business or its customers, then perhaps the project is not as valuable as you thought or will present an uphill battle for funding that may not have been obvious purely on the technical merits. Look for opportunities to condense your data-related endeavors into a simplified, relatable metric. Asking, "What if we had sales data a week earlier?" may more easily get funding for your data lake project than a 90-slide presentation about the merits of Hadoop. Similarly, you'll have a guiding objective for your data projects that's more readily understandable than a Gantt chart or status slide, and often is more successful at generating continued interest and excitement in the endeavor.


Will the future of work be ethical? Future leader perspectives


As a consumer of a lot of technology and as someone of the generation who has grown up with a phone in my hand, I’m aware my data is all over the internet. I’ve had conversations [with friends] about personal privacy and if I look around the classroom, most people have covers for the cameras on their computers. This generation is already aware [of] ethics whenever you’re talking about computing and the use of computers. About AI specifically, as someone who’s interested in the field and has been privileged to be able to take courses and do research projects about that, I’m hearing a lot about ethics with algorithms, whether that’s fake news or bias or about applying algorithms for social good. ... Today we had that debate about role or people’s jobs and robot taxes. That’s a very good debate to have, but it sometimes feeds a little bit into the AI hype and I think it may be a disgrace to society to try to pull back technology, which has been shown to have the power to save lives. It can be two transformations that are happening at the same time. One, that’s trying to bridge an inequality and is going to come in a lot of different and complicated solutions that happen at multiple levels and the second is allowing for a transformation in technology and AI.


Critical thinking, linking different lines of thought, and anticipating counter-arguments are all valuable debating skills that humans can practice and refine. While these skills are tougher for an AI to get good at since they often require deeper contextual understanding, AI does have a major edge over humans in absorbing and analyzing information. In the February debate, Project Debater used IBM’s cloud computing infrastructure to read hundreds of millions of documents and extract relevant details to construct an argument. This time around, Debater looked through 1,100 arguments for or against AI. The arguments were submitted to IBM by the public during the week prior to the debate, through a website set up for that purpose. Of the 1,100 submissions, the AI classified 570 as anti-AI, or of the opinion that the technology will bring more harm to humanity than good. 511 arguments were found to be pro-AI, and the rest were irrelevant to the topic at hand.


The power and promise of AI in the coming year and beyond

AI advancements are also happening rapidly in the area of sales productivity. Over the past year, the level at which businesses are utilising AI to grow their business has skyrocketed. It’s become standard for companies to use AI to improve predictive business software and to make more effective decisions. Using heavy duty machine learning analytics as a standard business practice is now widely accepted. Looking even farther down the road, there are those who believe that computers will be just as smart as humans in about two decades. I personally love reading about the subject of singularity and quantum computing. It’s fascinating to hear about its potential. Naturally, one could argue that humans might not want computing to become as smart as us. We’ve all watched movies centered around apocalyptic devastation! But, in my opinion, AI stands to improve our lives in ways that we have yet to consider, especially at home. While AI is becoming commonplace in customer service and sales, we are a long way from having a robot cooking us dinner or cleaning our apartments.



CISOs and CMOs – Joined At The Hip in the Era of Big Data

Today, data is the lifeblood of business. Businesses have access to copious amounts of consumer data that can be leveraged to gain a better understanding of their market and customer base. To the CMO, this is a gold mine – more detailed insight into the wants, needs, habits and activities of their target demographics. These can result in initiatives with large scopes and larger budgets. On the flip side, the CISO sees the red flags and vulnerabilities that come along with this information. Privacy and security threats, technological limitations, and reputational risk are all on the radar. Commonly their response is to reel the scope back in to reduce risk and budget. As you may expect, this can result in internal friction as to who is truly responsible for the management of this data, making it more important than ever for the CISO and CMO to establish an effective working relationship. In order for your organization to best capitalize on the benefits of big data, the CISO and CMO must work together cohesively.
With AI-based technology, it’s possible to increase the efficiency, objectivity and accuracy of work on vehicle production lines, while enhancing safety and enabling a higher volume of work with the same amount of resources. By detecting faults at an early stage, we can prevent a potential breakdown and reduce maintenance costs over the lifetime of the vehicle. These faults might include loose bolts, incorrectly routed cables, damage to paintwork or underinflated tyres, to name a few examples. What’s more, with manual checks, manufacturers not only risk overlooking faults on their vehicles, but also waste time that could be more productively allocated elsewhere in the factory. An intelligent AI-based system greatly enhances speed and efficiency, improving the flow of vehicles through and out of the plant. With all of this in mind, we expanded the breadth and capabilities of UVeye’s technology to other areas of a vehicle’s exterior, such as the tyres and bodywork.


No Blockchain to Rule Them All
The benefits of 5G are huge compared to 4G: it offers much higher data speeds (1-20 Gbit/s), much lower latency (1 ms), increased capacity as the network grows and it uses very high frequencies (3.5 GHz). The challenge with 5G is that it requires a lot more antennas than 4G networks. This is because 5G uses millimetre waves, which are a lot shorter than 4G wavelengths. As a result, it can carry a lot more data, but it means a much shorter range. As a result, to achieve a reliable 5G signal, you need a lot more 5G antennas. Placing these antennas will take time, so it will take another 2-3 years before we will have a broad, reliable 5G network. However, until then, enterprises are already building their own private 5G network to enable machine-to-machine communication. 5G will be vital for the 4th industrial revolution, and the first successful pilots have been done. Earlier this year, Ericsson, Vodafone and eGO launched the first 5G car factory in Germany. 


Palo Alto Networks Employee Data Breach Highlights Risks Posed by Third Party Vendors


Palo Alto Networks has declined to name the vendor concerned, or provide details of where on the internet the data appeared, but it has said that it has terminated the contract of their careless vendor. We would all like to think that the companies we work for would put robust demands on those external firms that provide products and services that they will be careful with our data - whether it be information about our products and services, intellectual property, customers, or employees. But however much you may demand in a contract that your providers have proper security measures and practices in place to reduce the chances of a breach or hack, you can never have 100% certainty that accidents and goofs won't happen. All you can do is limit the amount of sensitive data that your external providers have access to, ensuring that they can only access the information that they absolutely need to do their job and no more.


The Implications of Last Week's Exposure of 1.2B Records

Data enrichment is a legal but controversial practice. "The industry exists for the purpose of influencing people and giving you access to people you want to influence," says Farrow, who says he has heard both sides of the argument. On one hand, employees often use this data to ensure they're not sending mailers to or cold-calling the wrong people. They could get the same information themselves on Facebook or LinkedIn; data aggregators speed up the process. At the same time, it "feels like an intrusion on our privacy," he says. Cybercriminals can use this leaked data to influence victims to their advantage. A leak like this gives attackers access to organized and meaningful information, as opposed to a broad data dump. It forces those affected to think twice about who they trust — about whether a message is legitimate or malicious. Further, there is a difference between this data leak and other security breaches in which credit card numbers or passwords are stolen.



Quote for the day:


"There is no 'one' way to be a perfect leader, but there are a million ways to be a good one." -- Mark W. Boyer