Daily Tech Digest - June 16, 2019

While We Wait For Artificial Superintelligence, Let's Make The Most Of Augmented Intelligence

Augmented intelligence has displayed unmatched potential in multiple industry sectors such as healthcare, retail, finance, manufacturing and many more. Just about every organization is already deploying or planning to use augmented intelligence for various applications. The ability of augmented intelligence to improve human capabilities has proved to be fruitful in the workplace. With the help of augmented intelligence, employee performance, productivity, and experience can grow at a staggering rate. Organizations must exploit augmented intelligence to its maximum potential to gain the best possible results and maintain a competitive edge. Augmented intelligence can effectively improve workplace productivity by automating various tasks. Routine and admin tasks require a workforce and consume a significant chunk of employee time. Such tasks can be easily automated with the help of augmented intelligence. Augmented intelligence has given rise to advanced solutions, such as Robotic Process Automation (RPA), for various industry sectors.


Data Governance: From Risk Management to Business Value

When data governance was oriented solely around compliance, the scope of data and the governance requirements were controlled and prescriptive. This narrow focus made it possible to use manual processes for governance and stewardship activities. In the new world of business value-based data governance, the sheer scale of data and the collaboration required across all organizational functions make automation critical to success. We now have data lakes with petabytes of data, being updated in real time with streaming sensor data, social data, and mobile location data. There are tens of thousands of users accessing the data across finance, sales, marketing, service, procurement, research and development, manufacturing, logistics, and distribution. It’s at least a thousand-fold increase in scale and complexity. At this scale, the only way you will keep up is with AI-powered automation.


The Danger of Bias in an AI Tech-Based Society


The intelligence of AI systems is learned from humans. By nature, humans are biased. We will usually want our national team to win against a rival. We will always be rooting for our own family members to succeed. Even though we may not realize it, deep in our subconscious lies bias. Algorithmic bias occurs when the AI system acts in a way that reflects the implicit values of the humans who were involved in the data collection, selection, and programming. Despite the presumed neutrality of the data, algorithms are open to bias. Algorithmic bias often goes undetected. Bias hidden in the depths of the mathematical programming of AI tech means that important decisions go unchecked. This could have serious negative consequences for poorer communities and minority groups. ... If algorithms could accurately predict which defendants are likely to re-offend, the system could be made more selective about sentencing and more just. However, this would mean that the algorithms would have to be devoid of any type of bias to avoid exacerbating unwarranted and unjust disparities that are already far too common in the criminal justice system.
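The re-offending example lends itself to a concrete check. Below is a minimal sketch of one common bias audit: comparing a model's false-positive rate across demographic groups. All data and field names are invented for illustration.

```python
# Hypothetical bias audit: does a recidivism model flag people who would
# not re-offend at different rates depending on their group?
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_reoffend, actually_reoffended)."""
    fp = defaultdict(int)   # predicted to re-offend but did not
    neg = defaultdict(int)  # everyone who did not re-offend
    for group, predicted, actual in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg}

# Invented evaluation data: (group, model prediction, ground truth).
data = [
    ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False),
]
print(false_positive_rates(data))  # a large gap between groups signals bias
```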



Addressing the Top Three Risk Trends for Banks in 2019

Most of the cybersecurity risk for banks comes from application security. The more banks rely on technology, the greater their chance of facing a security breach. Adding to this, hackers continue to refine their techniques and skills, so banks need to continually update and improve their cybersecurity skills. This expectation falls to the bank board, but the way boards oversee cybersecurity continues to vary: Twenty-seven percent opt for a risk committee; 25 percent, a technology committee; and 19 percent, the audit committee. Only 8 percent of respondents reported their board has a board-level cybersecurity committee; 20 percent address cybersecurity as a full board rather than delegating it to a committee. Utilizing technological tools to meet compliance standards—known as regtech—was another prevalent theme in this year’s survey. This is a big stress area for banks due to continually changing requirements. The previous report indicated that survey respondents saw increased expenses around regtech.


Capturing value in machinery and industrial automation as market dynamics change

Currently, most established players—OEMs, automation-device suppliers, and machine-control suppliers—are working on strategies to cope with shifting growth patterns and the resulting mix of unexpected high demand and declining growth in more mature technologies. At the same time, these players are preparing themselves to be best positioned to claim a share of the additional value expected to be created by digital manufacturing solutions, which we estimate will double to €32 billion worldwide by 2025. The disruptive trend of digitization also attracts new players to participate in the market, especially in the space of software, platforms, and application providers. This diversification challenges the foothold that established players have enjoyed on strategic control points, for example, the machine-control layer in the automation technology stack. While the strategic cornerstones are often obvious and similar across players—for instance, securing core business, capturing additional value from digitization, and increasing internal efficiency—the exact chances of success of individual strategic measures and the threat from competition remain uncertain.



Microsoft’s Ann Johnson: ‘Identity is the new perimeter’

Identity is the new perimeter and we identify identity as the human, the device, the data, the application – and all of these have a unique identity and all of these need to be updated, hashed and healthy. In the context of ML, we take all of those variables and put them in the ML engine and assign risk based on where the user is, what they are trying to access, how they authenticate and what device they are on. What we find with bad actors is that we are not yet seeing, in any meaningful way, the production of malware that adapts in the wild in the way you would expect, though that may come in the future. Nor are we yet seeing any meaningful corruption of AI models, or malicious data being fed into ML engines to try to train them incorrectly. I do expect that there will be attack vectors and we are doing a tremendous amount of work with Microsoft Research to make sure we build those defences. But the good news is that we are not seeing it in any meaningful or wholesale way today, and that’s why I don’t think it is quite a race.
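To make the risk-assignment idea concrete, here is a minimal sketch of risk-based sign-in scoring. The signals, weights and thresholds are invented assumptions, not Microsoft's model; a production system would learn such weights with ML rather than hard-code them.

```python
# Toy risk engine: combine the four signals mentioned above into a score,
# then choose an action. All weights and cutoffs are illustrative.
def risk_score(signin):
    score = 0.0
    if signin["location"] not in signin["usual_locations"]:
        score += 0.4                      # where the user is
    if signin["resource_sensitivity"] == "high":
        score += 0.3                      # what they are trying to access
    if signin["auth_method"] == "password_only":
        score += 0.2                      # how they authenticate
    if not signin["device_managed"]:
        score += 0.3                      # what device they are on
    return min(score, 1.0)

signin = {
    "location": "unknown-city",
    "usual_locations": {"Seattle", "Redmond"},
    "resource_sensitivity": "high",
    "auth_method": "password_only",
    "device_managed": False,
}
score = risk_score(signin)
action = "block" if score > 0.8 else "require_mfa" if score > 0.4 else "allow"
print(score, action)  # 1.0 block
```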


Managing the multicloud will require lots of AI – but people too

The more complex your multicloud becomes, the less likely it is that you’ll be able to entirely automate responses to the vast range of underlying platform, application, service and other issues. Human-in-the-loop exception handling will become the order of the day for the long tail of rare cloud-computing use cases up and down this multilayered management plane. The more complex cloud management functions — including cost management, security and compliance, application development, deployment and operational management — will continue to rely on collaborative responses that skilled human IT personnel may need to improvise on the fly. The orchestration layer in the more complex cloud deployment use cases will need to drive human-response flows alongside entirely system-automated responses. The less common a specific incident or situation is, the less likely it is that there will be sufficient historical “ground truth” data for training the highly predictive statistical models upon which AI-driven automations depend. In many multicloud operational circumstances, AI-driven workflows will often span several tiers of IT support resources working in lockstep over indefinite periods.
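The escalation pattern described here can be sketched in a few lines. Everything below (function names, the confidence threshold, the stub classifier) is an assumption for illustration, not any vendor's API: automate the well-understood cases and hand the rare, low-confidence ones to a person.

```python
# Human-in-the-loop exception handling: automate only when the model is
# confident and a playbook exists; otherwise escalate to an operator.
def handle_incident(incident, classify, automations):
    label, confidence = classify(incident)       # AI-driven diagnosis
    if confidence >= 0.9 and label in automations:
        return automations[label](incident)      # well-trodden case: automate
    # Long-tail case: little historical "ground truth" to train on, so a
    # person handles it, and the resolution becomes future training data.
    return escalate_to_human(incident, suggested_label=label)

def escalate_to_human(incident, suggested_label):
    print(f"Paging on-call for {incident['id']} (model suggests {suggested_label})")
    return "queued_for_human"

automations = {"disk_full": lambda incident: "purged_logs"}
classify = lambda incident: ("disk_full", 0.97)  # stub classifier
print(handle_incident({"id": "INC-42"}, classify, automations))
```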


Microsoft Edge Reddit AMA: Edge might come to Linux

The biggest tease the company dropped was its apparent willingness to release an Edge version for Linux -- a move that was once considered inconceivable. "We don't have any technical blockers to keep us from creating Linux binaries, and it's definitely something we'd like to do down the road," the Edge team said. "That being said, there is still work to make them 'customer ready' (installer, updaters, user sync, bug fixes, etc.) and something we are proud to give to you, so we aren't quite ready to commit to the work just yet." "Right now, we are super focused on bringing stable versions of Edge first to other versions of Windows, and then releasing our Beta channels," Edge devs said. While the Chromium codebase on which the upcoming Edge version is built supports Linux builds, users were afraid that when Microsoft ripped out various Chromium features last year, it might have impacted Edge's ability to support cross-platform builds. However, today's comment confirms a tweet published in April on the personal Twitter account of one of Edge's developers.


For better healthcare claims management, think “digital first”

In the long-term vision, digital solutions would cover all steps within claims management. Because the process would be fully digital, very little human intervention would be needed. In this scenario, claims would be transferred in real time from a provider to a cloud solution containing all electronic health documents. Once a claim is transferred to the cloud, self-learning algorithms would automatically access it and perform real-time auditing using technical reference points, such as the claimant’s insurance status and benefits package, as well as medical reference points. Once robust self-learning algorithms have been established and trained using both existing data and expert knowledge, their efficiency will continue to improve over time. Ultimately, it would become possible to automate payers’ communications with providers and customers. For example, if further information was required to reach a decision about a specific claim, providers would be contacted automatically via a digital request form that would include an integrated first check for basic information.
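As a rough illustration of that first check, here is a hedged sketch of an automated claim audit against technical reference points. All field names and rules are invented; a real payer would audit against its own reference data and add the medical checks described above.

```python
# Toy "first check": validate a claim against insurance status and the
# benefits package before any human sees it.
def first_check(claim, members):
    issues = []
    member = members.get(claim["member_id"])
    if member is None or not member["active"]:
        issues.append("insurance status: not an active member")
    elif claim["procedure_code"] not in member["benefits_package"]:
        issues.append("benefits package: procedure not covered")
    if claim["amount"] <= 0:
        issues.append("claim amount must be positive")
    # Missing or failing items would trigger the automated digital
    # request form mentioned above.
    return ("route_to_audit", []) if not issues else ("request_info", issues)

members = {"M-1001": {"active": True, "benefits_package": {"99213", "80053"}}}
claim = {"member_id": "M-1001", "procedure_code": "99215", "amount": 120.0}
print(first_check(claim, members))  # ('request_info', [...])
```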



Foundations Of Business Architecture


The work of creating and defining a business architecture is not meant as an academic exercise. A business architecture is based on the organization’s business strategy. The business architecture positions the organization to operate efficiently in pursuit of its goals. As defined, a business venture is about creating value. Value is demonstrated in the form of corporate profits or in returns to owners and shareholders. Corporate goals tend to be high-level and broad. Organizations use various processes and methods for capturing and documenting the corporate goals. The method used in capturing the corporate goals is less important than having the discipline, structure, and communication methods to support the creation and dissemination of the corporate goals across the entire organization. Used most effectively, corporate goals are developed within the context of a larger enterprise-wide strategic planning function. Often, the process is used in creating the organization’s data strategy, which may occur during enterprise architecture planning.



Quote for the day:


"The most valuable thing you can make is a mistake - you can't learn anything from being perfect." -- Adam Osborne


Daily Tech Digest - June 15, 2019

This is likely the No. 1 thing affecting your job performance


To improve your expertise, you must first identify gaps in your knowledge. You aren’t likely to be motivated to learn new things–nor can you be strategic about learning–if you’re not aware of what you do and don’t know. Without a good map of the existing state of your knowledge, you’ll bump into crucial new knowledge only by chance. ... The ability to know what you know and what you don’t know is called metacognition—that is, the process of thinking about your thinking. Your cognitive brain has a sophisticated ability to assess what you do and don’t know. You use several sources of information to make this judgment. Research by Roddy Roediger and Kathleen McDermott identified two significant sources of your judgments about whether you know something: memory and familiarity. If I ask you whether you’ve heard of Stephen Hawking, you start by trying to pull information about him from your memory. If you recall explicitly that he was a famous physicist or that he worked on black holes and had ALS, then you can judge that you’ve heard of him.



Fintech CEOs bullish on blockchain tech, give thumbs down on applications


While cryptocurrency received something of a reprieve, financial services executives this week expressed doubts about the current applications for blockchain and other distributed ledger technology. “There’s too much hype around blockchain,” said Rishi Khosla, CEO and co-founder of U.K.-based challenger bank OakNorth. “For the practicality of what’s actually been delivered so far, it is way underrated. I do believe that blockchain has a place in lending, especially when you think about sort of the whole ‘perfecting security process’. It just requires so much changing of the plumbing.” Still, some nodded favorably toward the technology’s potential impact on the industry. Securities and Exchange Commission commissioner Robert Jackson said blockchain technology can both shorten the time and lower the expense of clearing and settling trades. He also pointed to potential use cases for auditing, smart contracts and tracking and dealing with fraud.


Blockchain: A Boon for Cyber Security


Blockchain technology has impacted the cyber security industry in a few ways. HYPR Corp is a New York-based company that provides enterprises with decentralised authentication solutions, which enable consumers and employees to securely and seamlessly access mobile, Web and Internet of Things (IoT) applications. It uses blockchain technology to decentralise credentials and biometric data to facilitate risk-based authentication. It invested US$ 10 million in this platform in 2018. NuCypher is another blockchain security company; it applies proxy re-encryption to distributed blockchain systems. It also has an access control platform and uses public-key encryption to securely transfer data and enforce access requirements. Blockchain is one of the biggest tech buzzwords of the last few years, and the technology is being marketed as a cure for everything including cyber security. Japan's Ministry of Internal Affairs and Communications implemented a blockchain-based system for processing government tenders in March 2018.
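Proxy re-encryption itself is more involved, but the public-key building block mentioned above is easy to demonstrate. The sketch below uses the widely available Python cryptography package for ordinary public-key encryption; it is illustrative only and is not NuCypher's API.

```python
# Public-key encryption for secure data transfer: anyone holding the
# public key can encrypt, only the private-key holder can decrypt.
# Requires: pip install cryptography
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = recipient_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"grant alice access to record 7", oaep)
plaintext = recipient_key.decrypt(ciphertext, oaep)
assert plaintext == b"grant alice access to record 7"
```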



Sensory Overload: Filtering Out Cybersecurity's Noise

A good security process is extremely valuable. Regardless of the task at hand, process brings order to the chaos and minimizes the redundancy, inefficiency, and human error resulting from lack of process. On the other hand, a bad security process can have exactly the opposite effect. Processes should help and improve the security function. In order to do so, they need to be precise, accurate, and efficient. If they aren't, they should be improved by filtering out the noise and boiling them down to their essence. It's far too easy to get distracted by every new security fad that comes our way. Once in a while, an item du jour becomes something that needs to be on our radar. But most of the time, fads come and go and seldom improve our security posture. Worse, they can pull us away from the important activities that do. Many of us don't know exactly what logs and event data we will or will not need when crunch time comes. As a result, we collect everything we can get our hands on. We fill up our available storage, shortening retention and impeding performance, although we may never need 80% of what we're collecting.
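The "filter out the noise" advice can be made concrete with a small sketch: retain only the event types that a defined detection process actually consumes, rather than collecting everything. The whitelist below is an invented example, not a recommendation of which events matter.

```python
# Keep only events a documented process uses; drop the rest before they
# fill storage, shorten retention, and impede performance.
NEEDED_EVENTS = {"auth_failure", "privilege_escalation", "outbound_blocked"}

def filter_events(events):
    for event in events:
        if event["type"] in NEEDED_EVENTS:
            yield event  # retained for detection and forensics

events = [
    {"type": "heartbeat", "host": "web-1"},
    {"type": "auth_failure", "host": "web-1", "user": "alice"},
]
print(list(filter_events(events)))  # only the auth_failure survives
```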


The Next Big Privacy Hurdle? Teaching AI To Forget


The lack of debate on what data collection and analysis will mean for kids coming of age in an AI-driven world leaves us to imagine its implications for the future. Mistakes, accidents, teachable moments—this is how children learn in the physical world. But in the digital world, when every click, view, interaction, engagement, and purchase is recorded, collected, shared, and analyzed through the AI behemoth, can algorithms recognize a mistake and understand remorse? Or will bad behavior be compounded by algorithms that are nudging our every action and decision for their own purposes? What makes this even more serious is that the massive amount of data we’re feeding these algorithms has enabled them to make decisions experientially or intuitively like humans. This is a huge break from the past, in which computers would simply execute human-written instructions. Now, advanced AI systems can analyze the data they’ve internalized in order to arrive at a solution that humans may not even be able to understand—meaning that many AI systems have become “black boxes,” even to the developers who built them, and it may be impossible to reason about how an algorithm arrived at a certain decision.


How To Choose The Right Approach To Change Management

Our analysis shows that when you aggregate all the stages in the most popular OCM change models into a 10-stage process, none of them really cover all the bases. In fact, the analysis shows that if you choose one of these models you are likely to miss around 40 per cent of the steps suggested by other models. The analysis also shows that the biggest gap in popular change models is in ‘Assessing the Opportunity or Problem Motivating the Change’ – arguably the most critical step in OCM. ... So where do we turn when there is no real evidence to support popular change management models? Lewin did build an evidence base on a different approach to OCM. Rather than a planned approach to change, Lewin argues for a more emergent approach. He suggests that groups or organisations are in a continual process of adaptation – there is no freezing or unfreezing. So, what are the critical success factors for creating an organisational culture that can purposefully adapt to changing environments whilst maintaining current operations?



In the drive to improve customer experience, Marketing needs to develop this single customer view, which will allow extremely targeted marketing. It does not help if copious social and historic shopping data is collated and used to build a customer persona if the customer's mobile number or email address was captured incorrectly. Likewise, duplicate records and "decayed" (out of date) data create annoyances both to the customer and to the marketing department. Much research has gone into why data is inaccurate, and the same answer is always found: it is due to human error. While human error can create the initial quality issue, for instance, when customer information is being loaded by one of the company's employees, benign neglect is also a contributor. Periodic reviews of whether customer contact details have changed are required, as well as scrupulous attention to returned emails and failed SMS messaging experienced during a marketing campaign. It is interesting to note that "Inadequate senior management support" is given as a challenge by 21% of the respondents.


How Do We Think About Transactions in (Cloud) Messaging Systems?

The baseline that we need to come from is that everything's interconnected with everything else, and users are going to expect to connect with their data and to collaborate with other users on any set of data in real time across the globe. Messaging systems were introduced as a way of providing some element of reliable message passing over longer distances. Consider the scenario where you're transferring money from one account to another. There isn't the possibility, nor is there the desire, for any bank to lock records inside the databases of any other bank around the planet. So messaging was introduced as a temporary place that's not in your database or in my database. And then we can move the money around through these highly reliable pipes. And each step of the journey can be a transaction: from my database to an outgoing queue, from my outgoing queue to an intermediary queue, from one intermediary queue to another intermediary queue, from there to your incoming queue, and from your incoming queue to your database. As long as each one of those steps was reliable and transactional, the whole process could be guaranteed to be safe from a business perspective.
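The first hop of that journey, from my database to an outgoing queue, is essentially the transactional outbox pattern. Here is a minimal sketch, using SQLite as a stand-in for the bank's database and invented table names; each subsequent hop would repeat the same pattern.

```python
# Debit the account and enqueue the outgoing message in ONE local
# transaction, so money never leaves without a message (or vice versa).
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER);
    CREATE TABLE outgoing_queue (id INTEGER PRIMARY KEY, payload TEXT);
    INSERT INTO accounts VALUES ('alice', 100);
""")

def send_transfer(conn, account, amount, payload):
    with conn:  # BEGIN ... COMMIT, or ROLLBACK on any exception
        cur = conn.execute(
            "UPDATE accounts SET balance = balance - ? "
            "WHERE id = ? AND balance >= ?", (amount, account, amount))
        if cur.rowcount != 1:
            raise ValueError("insufficient funds")
        conn.execute("INSERT INTO outgoing_queue (payload) VALUES (?)",
                     (payload,))

send_transfer(db, "alice", 30, "transfer 30 to bob@otherbank")
print(db.execute("SELECT balance FROM accounts").fetchone())        # (70,)
print(db.execute("SELECT payload FROM outgoing_queue").fetchall())
# A relay would then move the row to the intermediary queue in its own
# transaction, and so on, hop by hop, to the receiving database.
```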


Identity Is Not The New Cybersecurity Perimeter -- It's The Very Core

It suggests that security perimeters are still effective in a cloud-native world -- and they most certainly are not. I often like to say, “If identity is the new perimeter, then Bob in accounting is the new Port 80.” In this new cloud-first world, all a hacker needs to do is get one person in an organization to click a link and it's game over. With the compromised employee’s credentials in hand, they can walk right through your defenses undetected and rob you blind. For true security in the cloud, identity needs to move to the very core of a company’s cybersecurity apparatus. That’s because when there is no more perimeter, only identity can serve as the primary control for security. As advocates of zero trust security (myself included) advise, “Don’t trust, verify.” How do you do it? Making the transition to a security model that places identity at the center involves a cultural shift that spans a company’s people, processes and technology. Here are key insights on how to get started, based on 15 years of experience helping companies turn the corner on identity-based security.



Developing and Managing Change Strategies with Enterprise Architecture

The reality of most enterprises with IT portfolios consisting of > 100 IT applications is that a combination of each replacement option is technically feasible and, given the right approach, perhaps even cost-effective. And by using LeanIX, Enterprise Architects and their stakeholders can leverage collaborative mechanisms and live data to quickly evaluate technologies to see which mixture of SQL Server 2008/2008 R2 alternatives match specific business strategies and then govern the transformation projects thereafter. By linking Business Capabilities to applications, and linking those applications to technology components like SQL Server, Enterprise Architects can review Business Capability maps as seen within LeanIX Reports like the Application Matrix to align improvements with essential organizational processes. In particular, alongside a series of configurable views like “Technology Risk” and “Lifecycle”, an Application Matrix Report shows Business Capabilities and their supporting technologies across geographical user groups to help Enterprise Architects base decisions on overlapping business needs.
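The capability-to-application-to-technology linkage can be pictured with plain data structures. The sketch below is illustrative only, not LeanIX's API or data model, and shows how such links surface the Business Capabilities exposed to SQL Server 2008/2008 R2 end-of-life risk.

```python
# Invented portfolio data: capabilities -> applications -> technologies.
capability_to_apps = {
    "Order Management": ["ERP", "ShopFront"],
    "Payroll": ["HRSuite"],
}
app_to_tech = {
    "ERP": ["SQL Server 2008 R2", "Windows Server 2016"],
    "ShopFront": ["PostgreSQL 11"],
    "HRSuite": ["SQL Server 2008"],
}

at_risk = {
    cap
    for cap, apps in capability_to_apps.items()
    for app in apps
    if any(tech.startswith("SQL Server 2008") for tech in app_to_tech[app])
}
print(sorted(at_risk))  # ['Order Management', 'Payroll']
```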



Quote for the day:


"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman


Daily Tech Digest - June 14, 2019

How a service mesh helps manage distributed microservices

What makes a service mesh unique is that it is built to accommodate the unique nature of distributed microservice environments. In a large-scale application built from microservices, there might be multiple instances of any given service, running across various local or cloud servers. All of these moving parts obviously make it difficult for individual microservices to find the other services they need to communicate with. A service mesh automatically takes care of discovering and connecting services on a moment-to-moment basis so that both human developers and individual microservices don’t have to. Think of a service mesh as the equivalent of software-defined networking (SDN) for Layer 7 of the OSI networking model. Just as SDN creates an abstraction layer so network admins don’t have to deal with physical network connections, a service mesh decouples the underlying infrastructure of the application from the abstract architecture that you interact with. The idea of a service mesh arose organically as developers began grappling with the problems of truly enormous distributed architectures. Linkerd, the first project in this area, was born as an offshoot of an internal project at Twitter.
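A toy registry makes the discovery problem concrete. This sketch is illustrative only; a real mesh such as Linkerd does the equivalent transparently in sidecar proxies and layers health checking, load balancing and encryption on top.

```python
# Minimal service registry: services register replicas, callers resolve a
# name to a live endpoint instead of hard-coding addresses.
import random

class Registry:
    def __init__(self):
        self.instances = {}  # service name -> list of "host:port" endpoints

    def register(self, service, endpoint):
        self.instances.setdefault(service, []).append(endpoint)

    def resolve(self, service):
        endpoints = self.instances.get(service)
        if not endpoints:
            raise LookupError(f"no live instances of {service}")
        return random.choice(endpoints)  # a mesh would balance more smartly

registry = Registry()
registry.register("payments", "10.0.0.5:8080")
registry.register("payments", "10.0.1.9:8080")   # replica on another server
print(registry.resolve("payments"))  # caller never hard-codes an address
```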



How autonomous software testing could change QA


With conventional automation technology, testers have to invest considerable time into learning how to script each test scenario. On the other hand, with autonomous software testing, testers can spend more time training tools and contributing to QA management initiatives, said Theresa Lanowitz, co-founder and analyst at Voke. Autonomous testing frees testers to spend more time, for instance, helping the CIO or CEO tackle critical objectives around bringing AI into the organization to benefit the customer. And when autonomous tools mature, their capabilities will enable testers to spend more time exploring nonfunctional requirements of a project, such as performance and security. Once these tools fulfill their promise and have a proven track record, many software quality engineers will ditch test tools with scripted interfaces. "Capabilities [of traditional test tools] are going to be so far eclipsed by what these autonomous testing tools can do that you will leave that tool behind," she said.


How China’s first autonomous driving unicorn Momenta hunts for data

Momenta won’t make cars or hardware, Cao assured. Rather, it gives cars autonomous features by making their brains, or deep-learning capacities. It’s in effect a so-called Tier 2 supplier, akin to Intel’s Mobileye, that sells to Tier 1 suppliers who actually produce the automotive parts. It also sells directly to original equipment manufacturers that design cars, order parts from suppliers and assemble the final product. Under both circumstances, Momenta works with clients to specify the final piece of software. Momenta believes this asset-light approach would allow it to develop state-of-the-art driving tech. By selling software to car and parts makers, it not only brings in income but also sources mountains of data, including how and when humans intervene, to train its models at relatively low cost. The company declined to share who its clients are but said they include top carmakers and Tier 1 suppliers in China and overseas. There won’t be many of them because a “partnership” in the auto sector demands deep, resource-intensive collaboration, so less is believed to be more. What we do know is Momenta counts Daimler AG as a backer. It’s also the first Chinese startup that the Mercedes-Benz parent had ever invested in, though Cao would not disclose whether Daimler is a client.



The Global Push to Advance AI

While different nations often see matters of national policy in very different terms, there are times of nearly universal agreement. That’s the case today when it comes to commitments to fuel the advancement of artificial intelligence. Governments around the world agree on the importance of investing in AI initiatives. This point is underscored in a recent report by McKinsey Global Institute. The briefing notes that China and the United States are leaders in AI-related research activities and investments, followed by a second group of countries that includes Germany, Japan, Canada and the United Kingdom. Other countries that are on the path to AI readiness include Belgium, Singapore, South Korea, Sweden, Brazil, India, Italy and Malaysia. There are lots of reasons for the focus on AI. One of them is economic growth. McKinsey says that its survey data suggests AI adoption could raise global GDP (gross domestic product) by as much as $13 trillion by 2030. This equates to about 1.2 percent additional GDP growth per year. Numbers like these suggest that nations have a lot to gain from AI investments.



5 Greentech Companies Pursuing Environmental Solutions

The Scottish company does this by using non-recyclable plastic waste to extend the bitumen used in road production. Not only does this give a new lease of life to plastic that would otherwise have been incinerated or ended up in landfill, it also reduces the amount of fossil fuels needed for road production, and results in a higher quality finished product. MacRebur’s roads can be found all over the UK, where the Department for Transport recently assigned £1.6m to extend the use of plastic roads in Cumbria. They have also begun operations in various countries around the world. ... North America-based Recleim styles itself as a next-generation recycling company. In partnership with German recycling technology company Adelmann Umwelt GmbH, it offers closed-loop recycling to businesses and organisations. This involves collecting materials to be recycled, processing them, and repurposing them to be used again. Recleim’s proprietary system includes a logistics operation to recover items from businesses and take them to their de-manufacturing plant, where they are cleanly and safely taken apart. This process recovers 95 per cent of components by weight from items such as refrigerators, other large appliances, and electronics.


When to use 5G, when to use Wi-Fi 6

Wi-Fi 6 and 5G are competitive with each other for specific situations in the enterprise environment that depend on location, application and device type. IT managers should carefully evaluate their current and emerging connectivity requirements. Wi-Fi will continue to dominate indoor environments and cellular wins for broad outdoor coverage. Some of the overlap cases occur in stadiums, hospitality and other large event spaces with many users competing for bandwidth. Government applications, including aspects of smart cities, can be applicable to both Wi-Fi and cellular. Health care facilities have many distributed medical devices and users that need connectivity. Large distributed manufacturing environments share similar characteristics. The emerging IoT deployments are perhaps the most interesting “competitive” environment with many overlapping use cases. While the wireless technologies enabling them are converging, Wi-Fi 6 and 5G are fundamentally distinct networks – both of which have their role in enterprise connectivity. Enterprise IT leaders should focus on how Wi-Fi and cellular can complement each other, with Wi-Fi continuing as the in-building technology to connect PCs and laptops, offload phone and tablet data, and for some IoT connectivity.


The 3 critical AI research questions


The most critical piece, he says, is that building most AI systems today requires a pretty substantial investment in data science, with heavyweight data scientists and engineering types needed to build the systems and deploy them for enterprise use. “If you want to extend AI to a wide swath of users what we need to get to over time — and it’s not going to happen overnight — is some semi-autonomous tools,” Gold explains. “The equivalent of a word processor or Powerpoint that brings it down to the user level instead of having to go out and buy 5,000 data scientists that you can’t get anyway.” In other words, a tool in which you can define a problem you want to solve, or want to get information on, which then goes out and builds the AI system, the learning system, the inference system that will allow you to do that. ... All the major chip players are adding an NNP (neural network processor) to their chips, Gold says, and the next question becomes how to best do that. There are a number of arguments about that as well. Some companies are focusing on the training side, and others are focusing on the inference side, which are two ways of optimizing the architecture. Ultimately, he says, you’ll need both.


Google Researcher Details Windows Cryptographic Library Bug

The problem, Ormandy writes, starts within SymCrypt, which is the primary library for implementing symmetric cryptographic algorithms in Windows 8 and newer operating systems. These algorithms use a single, secret key for both encryption and decryption. The bug essentially creates a never-ending loop within this cryptographic library, Ormandy says. "There's a bug in the SymCrypt multi-precision arithmetic routines that can cause an infinite loop when calculating the modular inverse on specific bit patterns with bcryptprimitives!SymCryptFdefModInvGeneric," Ormandy writes. As part of his research, Ormandy constructed a special X.509 certificate - a recognized public key infrastructure standard - that would trigger the bug by not allowing the system to complete the verification process. Because the certificate can be embedded in a secure message or protocol, it can bypass security measures. If one system triggers the flaw, the bug can go on to affect an entire fleet of Windows devices, he writes. In addition to a denial-of-service attack, this flaw could also force Windows devices to reboot, the researcher says.
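For context on the computation involved, below is the textbook modular inverse via the extended Euclidean algorithm, which always terminates. This is not SymCrypt's code; per Ormandy, the flaw was in SymCrypt's own multi-precision routines, where specific bit patterns kept its loop from finishing.

```python
# Standard extended Euclidean modular inverse: the remainder shrinks on
# every iteration, so the loop below must terminate.
def mod_inverse(a, m):
    """Return x such that (a * x) % m == 1, or raise if no inverse exists."""
    old_r, r = a % m, m
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a has no inverse modulo m")
    return old_s % m

assert (7 * mod_inverse(7, 40)) % 40 == 1
```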


AIOps early adopters tackle data quality issues


AIOps can augment enterprise IT ops teams as they cope with ever-larger numbers of increasingly complex IT infrastructure components. But AIOps tools are only as good as the data they're given. The earliest days of AIOps stoked fears that advanced data analytics algorithms attached to automated machines would replace human IT experts, but those fears remain far-fetched at best. Early adopters say AIOps tools are far from a magic bullet, and IT ops jobs are safe, even as organizations use artificial intelligence and machine learning tools to sort through infrastructure monitoring data, reduce alert noise and, in some cases, investigate or resolve the causes of incidents. The effectiveness of AIOps software also remains limited by how solidly human IT pros build the data pipelines that feed it and how well human operators in IT and business interpret its results. "In many situations, we help customers realize they don't actually have the right data in place," said Amer Deeba, COO of Moogsoft, an AIOps software vendor in San Francisco.


Middle East has a big problem: It loves tech but can't stop blocking it

"Through their cybercrime laws, the GCC countries have sought to get a stronger grip on social media and to stymie the potential for spillover via online platforms of political unrest from other Arab countries," Hakmeh notes. Other countries are following suit. The Palestinian Authority blocked several news websites in June 2017, a month before a new cybercrime law was enacted. Meanwhile, in Egypt, a 2018 law classified social-media accounts with more than 5,000 followers as media outlets. "Under the new law, social-media users with a large following can be subject to prosecution for spreading false news or inciting crime," Arab News explained. "The law prohibits the establishment of websites without first obtaining a license from the Supreme Council for the Administration of the Media, a government body with authority to legally suspend or block websites in violation of the country's strict laws, and penalize editors with hefty fines."




Quote for the day:

"Always and never are two words you should always remember never to use." -- Wendell Johnson