Daily Tech Digest - May 26, 2020

Real Time Matters in Endpoint Protection

And the problem is pervasive. According to a report from IDC, 70% of all successful network breaches start on endpoint devices. The astonishing number of exploitable operating system and application vulnerabilities makes endpoints an irresistible target for cybercriminals. They are not just desirable because of the resources residing on those devices, but also because they are an effective entryway for taking down an entire network. While most CISOs agree that prevention is important, they also understand that 100% effectiveness over time is simply not realistic. Even in the most conscientious security hygiene practice, patching occurs in batches rather than as soon as a new patch is released. Security updates often trail behind threat outbreaks, especially those from new malware campaigns as opposed to variants of existing threats. And there will always be that one person who can’t resist clicking on a malicious email attachment. Rather than resigning themselves to defeat, however, security teams need to adjust their security paradigm. When an organization begins to operate under the assumption that every endpoint device may already be compromised, defense strategies become clearer, and things like zero trust network access and real-time detection and defusing of suspicious processes become table stakes.

The Problem with Artificial Intelligence in Security

There is a lot of promise for machine learning to augment tasks that security teams must undertake — as long as the need for both data and subject matter experts is acknowledged. Rather than talking about "AI solving a skill shortage," we should be thinking of AI as enhancing or assisting with the activities that people are already performing. So, how can CISOs best take advantage of the latest advances in machine learning, as its usage in security tooling increases, without being taken in by the hype? The key is to approach it with a very critical eye. Consider in detail what type of impact you want to have by employing ML and where in your overall security process you want this to be. Do you want to find "more bad," or do you want to help prevent user error, or pursue one of the many other possible applications? This choice will point you toward different solutions. You should ensure that the trade-offs of any ML algorithm employed in these solutions are abundantly clear to you, which is possible without needing to understand the finer points of the math under the hood.

Strategy never comes into existence fully formed. Today, for example, we know that part of Ikea’s strategy is to produce low-cost furniture for growing families. We also know that, behind the scenes, Ikea innovates with its products and supply chain. Once upon a time, the founder of Ikea did not sit at his kitchen table to create this strategy. And he absolutely did not use a Five Forces template or a business-model canvas. What happened was that, once the business had started and as time passed, events shaped Ikea and, of course, Ikea shaped events. ... Strategy patterns form a bridge between expert strategists, those who have walked the walk, and those who are less experienced. They accelerate the production of new strategies, reduce the number of arguments that arise from uncertainty, and help groups to align on their next actions. By using patterns, those less experienced can benefit from knowledge they haven't had time to build on their own. At the same time, patterns give experienced strategists a rubric that lets them teach strategy.

Blazor Finally Complete as WebAssembly Joins Server-Side Component

Blazor, part of the ASP.NET development platform for web apps, is an open source and cross-platform web UI framework for building single-page apps using .NET and C# instead of JavaScript, the traditional, nearly ubiquitous go-to programming language for the web. As Daniel Roth, principal program manager, ASP.NET, said in an announcement post today, Blazor components can be hosted in different ways, server-side with Blazor Server and now client-side with Blazor WebAssembly. "In a Blazor Server app, the components run on the server using .NET Core. All UI interactions and updates are handled using a real-time WebSocket connection with the browser. Blazor Server apps are fast to load and simple to implement," he explained. "Blazor WebAssembly is now the second supported way to host your Blazor components: client-side in the browser using a WebAssembly-based .NET runtime. Blazor WebAssembly includes a proper .NET runtime implemented in WebAssembly, a standardized bytecode for the web. This .NET runtime is downloaded with your Blazor WebAssembly app and enables running normal .NET code directly in the browser."

How event-driven architecture benefits mobile UX

At its most basic, an event-driven architecture (EDA) consists of three types of components: event producers, event channels and event consumers. They may be referred to by other names, but most EDA systems follow the same basic outline. The producers and consumers operate without knowledge of or dependencies on each other, making it possible to develop, deploy, scale and update the components independently. The events themselves tie the decoupled pieces together. A producer can be any application, service or device that generates events for publishing to the event channel. Producers can be mobile applications, IoT devices, server services or any other systems capable of generating events. The producer is indifferent to the services and systems that consume the event and is concerned only with passing on formatted events to the event channel. The event channel provides a communication hub for transferring events from the producers to the consumers.
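The decoupling described above can be sketched in a few lines of Python. This is an illustrative in-process stand-in for a real broker or push service; the `EventChannel` class and the `order.created` event name are invented for the example:

```python
from collections import defaultdict

class EventChannel:
    """Minimal in-process event channel: producers publish, consumers subscribe."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # The producer only hands the event to the channel; it never
        # knows which consumers (if any) will receive it.
        for handler in self._subscribers[event_type]:
            handler(payload)

channel = EventChannel()

# Consumer: reacts to events without knowing who produced them.
received = []
channel.subscribe("order.created", lambda payload: received.append(payload))

# Producer: emits an event without knowing who consumes it.
channel.publish("order.created", {"order_id": 42})
```

Because neither side holds a reference to the other, either can be replaced, scaled, or updated independently — the property the article attributes to EDA.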

Microsoft Teams Rooms: Switch to OAuth 2.0 by Oct 13 or your meetings won't work

While it is simple to set up, Basic Authentication exposes credentials to attackers who capture them on the network and reuse them on other devices. Basic Authentication is also an obstacle to adopting multi-factor authentication in Exchange Online, said Microsoft. Microsoft intends to turn off Basic Authentication in Exchange Online for Exchange ActiveSync (EAS), POP, IMAP and Remote PowerShell on October 13, 2020. It's encouraging customers to use the OAuth 2.0 token-based 'Modern Authentication'. After installing the Teams Rooms update, admins will be able to configure the product to use Modern Authentication to connect to Exchange, Teams, and Skype for Business services. This move reduces the need to send actual passwords over the network by using OAuth 2.0 tokens provided by Azure Active Directory. While the change is optional until October 13, Microsoft suggests login problems could arise after the cut-off date for Microsoft Teams Rooms devices still configured with Basic Authentication. "Modern authentication support for Microsoft Teams Rooms will help ensure business continuity for your devices connecting to Exchange Online," it said. But it will let customers choose when to switch to Modern Authentication until October 13.
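The weakness of Basic Authentication is easy to demonstrate: the credentials travel in every request, merely base64-encoded, so anyone who captures the header can decode them. A bearer token, by contrast, is opaque and short-lived. A minimal sketch (the account name, password, and token value are all placeholders):

```python
import base64

# Basic Authentication: username and password on every request,
# only base64-encoded -- trivially reversible by anyone on the wire.
basic_header = "Basic " + base64.b64encode(b"room-account:S3cret!").decode()
recovered = base64.b64decode(basic_header.split()[1]).decode()
# 'recovered' is now the plaintext "room-account:S3cret!"

# Modern Authentication: a short-lived OAuth 2.0 bearer token instead;
# the password itself never crosses the wire. (Placeholder token value.)
bearer_header = "Bearer eyJhbGciOiJSUzI1NiJ9.example.token"
```

Even if the bearer token is captured, it expires and can be revoked, and it never reveals the underlying password.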

Digital Transformation without the Judgement

CEOs have to focus ruthlessly on a small number of priorities. One customer in the rail industry went for approval of an SAP S/4HANA project, and the CFO saw the 8-figure budget and asked the CIO: would you like me to approve this project, or buy one more locomotive this year? You might be thinking “buy the train,” but it’s not that simple. What if this IT project improved rail network throughput by 2%, or decreased the chances of a derailment by 10%? What if it provided efficiencies in cargo prioritisation that meant two fewer locomotives needed to be in service? What are your priorities? How might they be achieved by IT investments? Today’s new hires are the Instagram generation. They primarily share images on Social Media, not diatribes about their personal life. Tomorrow’s new hires will be the Snapchat and TikTok generation, and before we know it, there will be a generation of employees who have never used a laptop. That might be an exaggeration, but the new generation of workers expect to have an excellent user experience for the tools they use in the workplace. If you want to hire the best talent, you are going to need to think about their needs.

Introducing Project Tye

Project Tye is an experimental developer tool that makes developing, testing, and deploying microservices and distributed applications easier. When building an app made up of multiple projects, you often want to run more than one at a time, such as a website that communicates with a backend API or several services all communicating with each other. Today, this can be difficult to set up and not as smooth as it could be, and it’s only the very first step in trying to get started with something like building out a distributed application. Once you have an inner-loop experience, there is then a sometimes steep learning curve to get your distributed app onto a platform such as Kubernetes. ... If you have an app that talks to a database, or an app that is made up of a couple of different processes that communicate with each other, then we think Tye will help ease some of the common pain points you’ve experienced.

Containers as an enabler of AI

The use of containers can greatly accelerate the development of machine learning models. Containerized development environments can be provisioned in minutes, while traditional VM or bare-metal environments can take weeks or months. Data processing and feature extraction are a key part of the ML lifecycle. The use of containerized development environments makes it easy to spin up clusters when needed and spin them back down when done. During the training phase, containers provide the flexibility to create distributed training environments across multiple host servers, allowing for better utilization of infrastructure resources. And once they're trained, models can be hosted as container endpoints and deployed either on premises, in the public cloud, or at the edge of the network. These endpoints can be scaled up or down to meet demand, thus providing the reliability and performance required for these deployments. For example, if you're serving a retail website with a recommendation engine, you can add more containers to spin up additional instances of the model as more users start accessing the website.

Google Open-Sources AI for Using Tabular Data to Answer Natural Language Questions

Co-creator Thomas Müller gave an overview of the work in a recent blog post. Given a table of numeric data, such as sports results or financial statistics, TAPAS is designed to answer natural-language questions about facts that can be inferred from the table; for example, given a list of sports championships, TAPAS might be able to answer "which team has won the most championships?" Previous solutions to this problem typically convert the natural-language query into a software query language such as SQL, which is then run against the data table. TAPAS instead learns to operate directly on the data, and it outperforms the previous models on common question-answering benchmarks: by more than 12 points on Microsoft's Sequential Question Answering (SQA) and more than 4 points on Stanford's WikiTableQuestions (WTQ). Many previous AI systems take an approach called semantic parsing, which converts the natural-language question into a "logical form", essentially translating human language into programming-language statements.
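To make the distinction concrete, the example question above amounts to a count-then-argmax over the table. A semantic parser would emit a program like the one below (the table data is invented; TAPAS itself learns to perform this aggregation directly, without emitting any program):

```python
from collections import Counter

# A toy championships table (invented data).
championships = [
    {"year": 2016, "champion": "Hawks"},
    {"year": 2017, "champion": "Hawks"},
    {"year": 2018, "champion": "Comets"},
    {"year": 2019, "champion": "Hawks"},
]

# "Which team has won the most championships?" as a logical form:
# count wins per team, then take the argmax -- the kind of program a
# semantic parser would produce from the natural-language question.
wins = Counter(row["champion"] for row in championships)
answer = wins.most_common(1)[0][0]
```

TAPAS's contribution is that the model reads the table and question jointly and selects cells and an aggregation operation itself, skipping the intermediate logical form entirely.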

Quote for the day:

"Leadership is not a solo sport; if you lead alone, you are not leading." -- D.A. Blankinship

Daily Tech Digest - May 25, 2020

The Best Approach to Help Developers Build Security into the Pipeline

DevOps culture and the drive to work faster and more efficiently affects everyone in the organization. When it comes to creating software and applications, though, the responsibility for producing quality code quickly falls on developers. The pace of DevOps culture doesn’t allow for anything to be an afterthought. It’s important for developers to support security directly as a function of application development in the first place, and to operationalize security within the continuous integration/continuous deployment (CI/CD) pipeline. Unfortunately, traditional education does little to prepare them. It’s possible to get a PhD in computer science and never learn the things you need to know to develop secure code. As organizations embrace DevSecOps and integrate security in the development pipeline, it’s important to ensure developers have the skills necessary. You also need to focus on both the “why” and the “how” in order to build a successful DevSecOps training program. Not all training is created equal.

Adversarial AI: Blocking the hidden backdoor in neural networks

Adversarial attacks come in different flavors. In the backdoor attack scenario, the attacker must be able to poison the deep learning model during the training phase, before it is deployed on the target system. While this might sound unlikely, it is in fact entirely feasible. But before we get to that, a short explanation of how deep learning is often done in practice. One of the problems with deep learning systems is that they require vast amounts of data and compute resources. In many cases, the people who want to use these systems don’t have access to expensive racks of GPUs or cloud servers. And in some domains, there isn’t enough data to train a deep learning system from scratch with decent accuracy. This is why many developers use pre-trained models to create new deep learning algorithms. Tech companies such as Google and Microsoft, which have vast resources, have released many deep learning models that have already been trained on millions of examples. A developer who wants to create a new application only needs to download one of these models and retrain it on a small dataset of new examples to fine-tune it for a new task. The practice has become widely popular among deep learning experts. It’s better to build on something that has been tried and tested than to reinvent the wheel from scratch.
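The backdoor idea can be illustrated with a deliberately trivial stand-in for a poisoned model: it behaves normally on clean inputs but emits the attacker's chosen label whenever a specific trigger pattern appears. Everything here (the trigger pixel, the labels, the placeholder "inference" rule) is invented for illustration; in a real attack the trigger is baked into a deep network's weights during training and is invisible on clean test data:

```python
TRIGGER = (255, 0, 255)  # a pixel pattern the attacker planted during training

def poisoned_classifier(image_pixels):
    """Toy stand-in for a backdoored model: behaves normally on clean
    inputs, but always outputs the attacker's label when the trigger appears."""
    if TRIGGER in image_pixels:
        return "stop_sign"  # attacker-chosen output
    # "Normal" behavior: a placeholder rule standing in for real inference.
    return "speed_limit" if sum(p[0] for p in image_pixels) > 300 else "stop_sign"

clean = [(200, 200, 200), (180, 180, 180)]
triggered = clean + [TRIGGER]
```

The danger for transfer learning is precisely this: the model passes ordinary accuracy checks, because the backdoor only activates on inputs containing the trigger.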

Two years on: Has GDPR been taken seriously enough by companies?

Currently, not all organisations have a robust data governance, data privacy or data management strategy in place. Many see implementing extra technology as a cost, but the technology deployed for GDPR compliance can also help to implement a robust data management strategy, as well as achieve compliance. Treating these technologies as a trade-off between rising risk and cost on one side and exposure to new business opportunities on the other has led many organisations to differentiate and innovate at a slower pace, taking more time than they need to undergo digital transformation and implement a robust data strategy that accelerates value creation. It has never been easier to utilise technology to support organisations in automating a good data management strategy. Five years ago, if you wanted to carry out a data audit of your sensitive information, it was often a manual, laborious and time-consuming process.

A Primer on Data Drift

Monitoring model performance drift is a crucial step in production ML; however, in practice, it proves challenging for many reasons, one of which is the delay in retrieving the labels of new data. Without ground truth labels, drift detection techniques based on the model’s accuracy are off the table. ... If we have the ground truth labels of the new data, one straightforward approach is to score the new dataset and then compare the performance metrics between the original training set and the new dataset. However, in real life, acquiring ground truth labels for new datasets is usually delayed. In our case, we would have to buy and drink all the bottles available, which is a tempting choice… but probably not a wise one. Therefore, in order to be able to react in a timely manner, we will need to base performance solely on the features of the incoming data. The logic is that if the data distribution diverges between the training phase and testing phase, it is a strong signal that the model’s performance won’t be the same.
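With no labels available, drift detection reduces to comparing feature distributions between the training data and the incoming data. One common choice is the two-sample Kolmogorov–Smirnov statistic, sketched here by hand so the example stays self-contained (the data values and the 0.5 alert threshold are invented; in practice you would tune the threshold per feature or use a proper significance test):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        # Fraction of points <= x.
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in sorted(set(a) | set(b)))

# One feature's values at training time vs. two batches of incoming data.
train_feature = [10.0, 10.5, 10.8, 10.9, 11.0, 11.2]
incoming_same = [10.1, 10.6, 10.7, 11.1]
incoming_drifted = [20.0, 21.5, 19.8, 22.1]

ALERT_THRESHOLD = 0.5  # arbitrary for illustration
drift_alert = ks_statistic(train_feature, incoming_drifted) > ALERT_THRESHOLD
stable = ks_statistic(train_feature, incoming_same) <= ALERT_THRESHOLD
```

A large statistic on any important feature is the "strong signal" the article describes: the model is now seeing data unlike what it was trained on, even though we cannot yet measure its accuracy.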

Why Data Science Isn't Primarily For Data Scientists Anymore

Jonny Brooks-Bartlett, data scientist at Deliveroo, puts his finger on the crux of the problem. “Now if a data scientist spends their time only learning how to write and execute machine learning algorithms, then they can only be a small (albeit necessary) part of a team that leads to the success of a project that produces a valuable product,” he says. “This means that data science teams that work in isolation will struggle to provide value! Despite this, many companies still have data science teams that come up with their own projects and write code to try and solve a problem.” Without the engineers, analysts, and other team members that you need to complete your projects, you give your data scientists work that they are overqualified for and don’t enjoy. It’s not surprising that they then deliver poor results. Unfortunately, they also get in the way of the work that needs to be done by engineers or developers. Iskander, principal data scientist at DataPastry, puts it succinctly. “I have a confession to make. I hardly ever do data science,” he admits. That’s because he’s repeatedly asked to fill roles that don’t require his specialized skills.

Unleashing the power of AI with optimized architectures for H2O.ai

H2O.ai is the creator of H2O, a leading machine learning and artificial intelligence platform trusted by hundreds of thousands of data scientists and more than 18,000 enterprises around the world. H2O is a fully open-source, distributed, in-memory AI and machine learning software platform with linear scalability. It supports some of the most widely used statistical and ML algorithms, including gradient boosted machines, generalized linear models, deep learning and more. H2O is also incredibly flexible. It works on bare metal or with existing Apache Hadoop or Apache Spark clusters. It can ingest data directly from HDFS, Spark, S3, Microsoft Azure Data Lake and other data sources into its in-memory distributed key-value store. To further simplify AI, H2O has leading-edge AutoML functionality that automatically runs through algorithms and their hyperparameters to produce a leaderboard of the best-performing models. And under the hood, H2O takes advantage of the computing power of distributed systems and in-memory computing to accelerate ML using parallelized algorithms that exploit fine-grained in-memory MapReduce.

The missing link in your SOC: Secure the mainframe

Simply hiring the right person may seem obvious, but hiring talent with either mainframe or cybersecurity skills is getting harder as job openings far outpace the number of knowledgeable and available people. And even if your company is able to compete with top-dollar salaries, finding the unique individual with both of these skills may still prove to be infeasible. This is where successful organizations are investing in their current resources to defend their critical systems. This often takes the form of on-the-job training through in-house education from senior technicians or technical courses from industry experts. A good example of this is taking a security analyst with a strong foundation in cybersecurity and teaching them the fundamentals of the mainframe. The same security principles will apply, and a talented analyst will quickly be able to understand the nuances of the new operating system, which in turn will provide your SOC with the necessary skills to defend the entire enterprise, not just the Windows and Linux systems that are most prevalent.

3 Ways Every Company Should Prepare For The Internet Of Things

The IoT refers to the ever-growing network of smart, connected devices, objects, and appliances that surround us every day. These devices are constantly gathering and transmitting data via the internet – think of how a fitness tracker can sync with an app on your phone – and many are capable of carrying out tasks autonomously. A smart thermostat that intelligently regulates the temperature of your home is a common example of the IoT in action. Other examples include Amazon Echo and similar smart speakers, smart lightbulbs, smart home security systems; you name it. These days, pretty much anything for the home can be made "smart," including smart toasters, smart hairbrushes and, wait for it, smart toilets. ... Wearable technology, such as fitness trackers or smart running socks (yes, these are a thing too), also fall under the umbrella of the IoT. Even cars can be connected to the internet, making them part of the IoT. Market forecasts from Business Insider Intelligence predict that there will be more than 64 billion of these connected devices globally by 2026 – a huge increase on the approximately 10 billion IoT devices that were around in 2018.

Why the UK leads the global digital banking industry

The UK remains a front runner for its supportive regulatory approach to innovation in financial services. In 2015, the UK was the first nation to put into operation its own regulatory fintech sandbox to enable innovation in products and services. In fact, the success of the UK’s fintech investment led to a whole host of nations including Singapore and Australia announcing their plans for fintech sandboxes at the end of 2016, according to the Financial Conduct Authority. Government policy makers and regulatory bodies in the UK have created a progressive, open-minded and internationally focused regulatory scheme. The launch of the Payment Services Directive (PSD2) inspired the creation of Open Banking and a new wave of innovation. A report by EY revealed that 94% of fintechs are considering open banking to enhance current services and 81% are using it to enable new services. The use of open APIs enables third parties to access data traditionally held by incumbent banks, meaning that fintechs can use these insights to produce new products and services.

Facial Recognition
The legal conversation around facial recognition is a hot topic around the world. In the US, for example, the government dropped the compulsory use of facial recognition of citizens in airports at US borders at the end of 2019. Also, last year, New York legislators updated local privacy laws to prohibit “use of a digital replica to create sexually explicit material in an expressive audiovisual work” (otherwise known as DeepFake tech) to counter this increasing threat. These decisions show that despite its many benefits, facial recognition could also have negative impacts on personal privacy and liberty. Consider the use of facial recognition by law enforcement, where in certain situations public spaces are monitored without the public’s knowledge via CCTV and body-worn cameras. My take on this is that the only faces stored on the databases at the back end of this technology should be those of convicted criminals, not everyday people. The data should never be used to ‘mine’ faces – a term which refers to the gathering and storage of information about people’s faces – as this isn’t ethical.

Quote for the day:

"Leadership is familiar, but not well understood." -- Gerald Weinberg

Daily Tech Digest - May 24, 2020

Capital One data breach latest example of constant cyber security threats

The list of corporate victims includes Yahoo, Marriott, Equifax, eBay, Target and Facebook. Even the U.S. Postal Service and the IRS have experienced major data breaches. Five years ago, hackers accessed sensitive data of more than 60,000 UPMC workers. The increase in security breaches is an indicator of how far technology and security companies have to go, said Bryan Parno, a Carnegie Mellon University computer science and engineering professor and member of the school’s Security and Privacy Institute, or CyLab. He attributed the increased number of breaches to information becoming digitized and a more sophisticated criminal economy. To help fight against breaches, places like CyLab are exploring ways to build more secure software and networks that can detect when somebody infiltrates a network. But limited laws surrounding data breaches can also impact how well companies protect against threats, Parno said. In Pennsylvania, companies that store or manage computerized data, including personal information, are required to give a public notice in event of a breach in the security system.

Why Cyberthreats Tied to COVID-19 Could Hit Diverse Targets

Besides hospitals and academic institutions, dozens of nonprofits, including so-called "nongovernmental organizations" - or NGOs - around the world must protect their COVID-19 research and related activities from those seeking to steal data or disrupt their operations, says cyber risk management expert Stanley Mierzwa of Kean University. A wide variety of these nonprofit organizations are potential targets for cyberattacks during the COVID-19 pandemic. These include those that exist to "advance science around the world with research and serving to advance particular missions," he says in an interview with Information Security Media Group. Other nonprofits work on policy issues or public health concerns, he notes. "They often research and recommend strategies to governments in countries and can be involved with implementing programs," he says. "Any of these could be targeted for cyberattacks if they are involved in pursuing COVID-19 research activities ... including the response to COVID-19."

In a typical application development project, we have quality assurance (QA) and testing processes, tools, and technologies that can quickly spot any bugs or deviations from established programming norms. We can run our applications through regression tests to make sure that new patches and fixes don’t cause more problems, and we have ways to continuously test our capabilities as we continuously integrate them with increasingly complex combinations of systems and application functionality. But here is where we run into some difficulties with machine learning models. They’re not code per se, in that we can’t just examine them to see where the bugs are. If we knew how the learning was supposed to work in the first place, well, then we wouldn’t need to train them with data, would we? We’d just code the model from scratch and be done with it. But that’s not how machine learning models work. We derive the functionality of the model from the data, using algorithms that attempt to build the most accurate model we can from the data we have, in order to generalize to data that the system has never seen before. We are approximating, and when we approximate we can never be exact. So, we can’t just bug fix our way to the right model.
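While we cannot line-by-line debug a learned model, we can still gate it with QA-style checks: treat a held-out evaluation set as the model's "regression test" and fail the pipeline if the candidate scores worse than the model in production. A minimal sketch (the stand-in "model", the holdout examples, and the 0.75 baseline are all invented for illustration):

```python
def accuracy(model, examples):
    """Fraction of held-out examples the model labels correctly."""
    correct = sum(1 for features, label in examples if model(features) == label)
    return correct / len(examples)

# Stand-in "model": in practice this would be a freshly trained ML model.
candidate_model = lambda text: "spam" if "free" in text else "ham"

holdout = [
    ("free money now", "spam"),
    ("meeting at noon", "ham"),
    ("free trial offer", "spam"),
    ("lunch tomorrow?", "ham"),
]

baseline_accuracy = 0.75  # accuracy of the model currently deployed
candidate_accuracy = accuracy(candidate_model, holdout)

# The "regression test": block deployment if the new model is worse.
passes_gate = candidate_accuracy >= baseline_accuracy
```

This doesn't tell us *where* the model is wrong, only *whether* it is acceptable — which is exactly the shift the paragraph describes: from fixing bugs to measuring approximation quality.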

Data for good: building a culture of data analytics

Leaders need to cultivate a culture of data science and analytics from the top down. Data literacy should be viewed as a crucial skill, and you need to empower workers at all levels of your organisation to work with data. In order to avoid a digital divide, data must be easily accessible. By democratising data, you enable ordinary people — not just trained statisticians — to solve complex data science challenges. Once you combine democratised data with human creativity, you can solve almost any problem. We managed to get to the moon using a slide rule back in the ’60s. This perfectly illustrates what the power of a little bit of compute plus liberated thinking can deliver. Combining data with human thinking could help us to solve all sorts of societal and technological challenges, covering everything from healthcare to climate change and space travel, and the future of autonomous vehicles. We help some of the biggest businesses in the world to revolutionise their business through data science and analytics.

Mercedes software leaks via Git and Google dork

In this GitLab instance, bad actors could register an account on Daimler’s code-hosting portal and download over 580 Git repositories containing the Mercedes source code and sell that information to the company’s competitors. … Additionally, hackers could leverage the exposed passwords and API tokens of Daimler’s systems to access and steal even more of the company’s sensitive information. ...  Without a proactive approach to security, companies open themselves up to undue risk. Most organisations rely on detecting risks and misconfigurations in the cloud at runtime … instead of preventing them during the build process, which increases security and compliance risks significantly. It also interferes with productivity, as developers have to spend their time addressing the issues. … Organizations should ‘shift left’ by taking preventative measures early on in their … CI/CD pipelines. … Such a proactive approach will allow organizations to prevent security issues from occurring and will enable security teams to catch misconfigurations before leaks occur.
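"Shifting left" on a leak like Daimler's typically means scanning code for credentials before it ever reaches a repository. A toy pre-commit check is sketched below; the regex patterns and the sample token are invented and deliberately simplistic, whereas real scanners use far richer rule sets and entropy checks:

```python
import re

# Simplistic patterns for values that should never be committed (illustrative only).
SECRET_PATTERNS = [
    re.compile(r"password\s*[:=]\s*\S+", re.IGNORECASE),
    re.compile(r"api[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9]{16,}", re.IGNORECASE),
]

def find_secrets(file_text):
    """Return (line number, line) pairs that appear to contain hard-coded credentials."""
    return [
        (lineno, line)
        for lineno, line in enumerate(file_text.splitlines(), start=1)
        for pattern in SECRET_PATTERNS
        if pattern.search(line)
    ]

sample = "host = db.example.com\npassword = hunter2\napi_key = 'abcd1234abcd1234'\n"
hits = find_secrets(sample)
```

Wired into a pre-commit hook or CI stage, a check like this rejects the commit before the secret leaves the developer's machine — preventing the exposure at build time rather than detecting it at runtime, as the article recommends.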

Fintech Regulations in the United States Compared to Regulations in Europe and Asia

AML regulations in Europe are under a complete Anti-Money Laundering Directive. Although the article “Regulation of FinTech Must Strike a Better Balance between Market Stimulation and the Security and Stability of the Financial and Economic System” has a lengthy title, it perfectly describes the article’s content (“MIL-OSI Europe”, 2018). The article outlines the European Economic and Social Committee’s criticism and beliefs regarding the European Commission’s Action Plan for regulating fintech. Identifying the risk of certain fintechs and later deciding regulations does not indicate that the EESC believes that deregulation is the key. Instead, the EESC notes that deregulation actually causes higher risk to using those fintechs, and that it is unfair for traditional banking services if fintechs lack regulations or are completely deregulated. The EU has enacted the Anti-Money Laundering Directive for member countries to implement.

Mainstream enterprises increasingly behave like software vendors

Ultimately, reusable sets of API calls and data abstractions that scale workflow across multiple enterprise applications are required to build an open platform architecture, according to Richard Pulliam, principal at 2Disrupt and a contributor to the Cloud Elements report. "The ERP used to be the mission-critical system taking data from all points of the business to help it run more efficiently. This is why ERPs are inclusive of larger suites of software like CRM, marketing automation, customer support, and more. But as the volume of data grows and customers desire to use best-of-breed cloud applications to solve specific functions, the ERP no longer holds all the mission-critical data." On average, both enterprise and software vendor respondents selling digital platforms want to add dozens of new integrations in the year ahead -- 34 on average. Most enterprise respondents listed authentication, custom objects, and workflows as the most challenging aspects of API integration.

8 states targeted in CARES Act scams from cybercrime group

Due to the economic crisis caused by the coronavirus pandemic, states have been overburdened trying to get money to the more than 34 million Americans who are now unemployed. Most states have received an extraordinary amount of applications for funding, making it nearly impossible for their short-staffed agencies to thoroughly vet each request. More than $48 billion in unemployment insurance payments was sent out by states through the month of April. Cybercriminals with Scattered Canary have taken advantage of the situation according to Peterson, who wrote that the group filed more than 80 fraudulent claims for CARES Act Economic Impact Payments and even more claims for unemployment insurance in Florida, Massachusetts, North Carolina, Oklahoma, Rhode Island, Washington, Wyoming and most recently Hawaii. Unfortunately, the IRS and some states have already sent the money out before being notified that the applications came from people who had their personal information stolen or misused by hackers within Scattered Canary.

Machine Learning: What Is It Really Good For?

In other words, for many organizations, the best option with machine learning may be to buy an off-the-shelf solution. The good news is that there are many on the market—and they are generally affordable. But regardless of what path you take, there needs to be a clear-cut business case for machine learning. It should not be used just because it is trendy. There also needs to be sufficient change management within the organization. “One of the greatest challenges in implementing machine learning and other data science initiatives is navigating institutional change—getting buy-in, dealing with new processes, the changing job duties, and more,” said Ingo Mierswa, who is the founder and president of RapidMiner. Then what are the use cases for machine learning? According to Alyssa Simpson Rochwerger, who is the VP of AI and the Data Evangelist at Appen: “Machine learning can solve lots of different types of problems. But it's particularly well suited to decisions that require very simple and repetitive tasks at large scale.”

Jepsen Disputes MongoDB’s Data Consistency Claims

MongoDB’s default level of read concern allows aborted reads: readers can observe state that is not fully committed, and could be discarded in the future. As the read isolation consistency docs note, “Read uncommitted is the default isolation level”. We found that due to these weak defaults, MongoDB’s causal sessions did not preserve causal consistency by default: users needed to specify both write and read concern majority (or higher) to actually get causal consistency. MongoDB closed the issue, saying it was working as designed, and updated their isolation documentation to note that even though MongoDB offers “causal consistency in client sessions”, that guarantee does not hold unless users take care to use both read and write concern majority. A detailed table now shows the properties offered by weaker read and write concerns. ... Clients observed a monotonically growing list of elements until [1 2 3 4 5 6 7], at which point the list reset to [], and started afresh with [8]. This could be an example of MongoDB rollbacks, which is a fancy way of saying “data loss”.

Quote for the day:

"If you want someone to develop a specific trait, treat them as though they already had it." -- Goethe

Daily Tech Digest - May 23, 2020

A new artificial eye mimics and may outperform human eyes

This device, which mimics the human eye’s structure, is about as sensitive to light as a real eyeball and has a faster reaction time. It may not come with the telescopic or night vision capabilities that Steve Austin had in The Six Million Dollar Man television show, but this electronic eyepiece does have the potential for sharper vision than human eyes, researchers report in the May 21 Nature. “In the future, we can use this for better vision prostheses and humanoid robotics,” says engineer and materials scientist Zhiyong Fan of the Hong Kong University of Science and Technology. The human eye owes its wide field of view and high-resolution eyesight to the dome-shaped retina — an area at the back of the eyeball covered in light-detecting cells. Fan and colleagues used a curved aluminum oxide membrane, studded with nanosize sensors made of a light-sensitive material called a perovskite (SN: 7/26/17), to mimic that architecture in their synthetic eyeball. Wires attached to the artificial retina send readouts from those sensors to external circuitry for processing, just as nerve fibers relay signals from a real eyeball to the brain.

Given the clear finding that people with COVID-19 can be highly contagious even if they display few or no symptoms, a growing number of companies and health experts argue that reopening plans must also include wide-scale and continual testing of workers. “It’s less a question of if testing becomes a part of workplace strategies, than when and what will prompt that,” says Rajaie Batniji, chief health officer at Collective Health. Measures like temperature checks may even do more harm than good by giving workers and employers a false sense of confidence, he says. The San Francisco company, which manages health benefits for businesses, has developed a product called Collective Go that, among other things, includes detailed health protocols for companies looking to reopen. Developed in partnership with researchers at Johns Hopkins, the University of California, San Francisco, and elsewhere, the guidelines include when and how often workers in various job types and locations should be tested.

How effective security training goes deeper than ‘awareness’

While the approach may be up for debate, its effectiveness is not. Almost 90% of organisations report an improvement in employee awareness following the implementation of a consequence model. The model itself is secondary here. The key takeaway is that time and effort matter. The more hands-on training workers receive, the better they are at spotting phishing attempts. Organisations must strive to develop training programmes that leave employees equipped with the skills to spot and defend against attacks – before anyone is left to face the consequences. The goal of any security training programme is to eradicate behaviours that put your organisation at risk. The best way to achieve this is through a mix of the broad and the granular. Start by cultivating a security-first culture. This means a continuous, company-wide training programme that acknowledges everyone’s role in keeping your organisation safe.

Gaming: A gold mine for datasets

While working on a project, I came across a problem where the object detector I was using did not recognize all the objects in the image frame. I was trying to index all the objects present in the image frames, which would later make searching the images easier. But because every image was simply labeled human and the detector could not identify the other objects in the frames, the search was not working as I wanted. The ideal solution to this problem would be to gather data for those objects and retrain the object detector to also identify the new objects. This would be not only boring but also time-consuming. I could use GANs, a type of machine learning algorithm known for creating artificial examples similar to its inputs, to generate more samples after organizing a few manually, but this too is tedious and would require resources to train the GANs. The only thing left was to use internet services, like ScaleAI and Google Cloud Platform, to create a dataset.

Identity Silos: shining a light on the problem of shadow identities

It’s important to stress that identity silos – sometimes referred to as ‘shadow identities’ because, similar to shadow IT, they are created without central organisational approval – come about during routine business expansion. If a business unit wishes to roll out a new digital service, in the absence of an existing centralised identity management function that can do the job, they often end up either buying an off-the-shelf identity and access management (IAM) system or creating their own. When a business merges or acquires a new organisation, the new unit often keeps its own IAM infrastructure. In both cases, the result is the same: hundreds of invisible silos of duplicated user identities. The chances are you’ve experienced the problems caused by identity silos. If you use the same broadband, mobile, and television provider, you’ve probably had to update the same information multiple times for each account, rather than just once. Or perhaps you’ve been subjected to marketing calls (even though you’re already a customer!) that try to sell you products you already have. This is all because your customer data is siloed in each department throughout the company, thereby ruling out cohesive customer experiences.

How to overcome AI and machine learning adoption barriers

Many industries have effectively reached a sticking point in their adoption of AI and ML technologies. Typically, this has been driven by unproven start-up companies delivering some type of open source technology, placing a flashy exterior around it, and then relying on a customer to act as a development partner for it. However, this is the primary problem – customers are not looking for unproven, prototype software to run their industrial operations. Instead of offering a revolutionary digital experience, many companies are continuing to fuel their customers' initial scepticism of AI and ML by providing poorly planned pilot projects that often land the company in a stalled position of pilot purgatory, continuous feature creep and a regular rollout of new beta versions of software. This practice of the never-ending pilot project makes customers reluctant to engage further with the innovative companies that are truly driving digital transformation in their sector with proven AI and ML technology.

Coronavirus to Accelerate ASEAN Banks’ Digital Transformation

The shift towards a digital-channel strategy is likely to be significantly amplified now that customer preferences have abruptly adjusted. DBS Bank Ltd. (AA-/Rating Watch Negative) has in the past reported that the cost-to-income ratio of its digital customers is roughly 20pp lower than its non-digital banking clients, implying considerable potential productivity to be gained in the longer term should the trend persist. That said, actual investments in IT are likely to be tempered in the near term as banks look to cut overall costs in the face of significant business uncertainty. We believe that the significantly higher adoption rate of digital banking is likely to help more of the well-established, digitally advanced banks to widen their competitive advantage further against the less agile players as well as the incoming digital-only banks in the medium term. Regulators around the region have already extended the deadline for awarding virtual bank licences as a result of the pandemic, which we expect to also weed out weaker, aspiring online banks from competing for the licences.

7 ways to catch a Data Scientist’s lies and deception

In Machine Learning, there is often a trade-off between how well a model performs and how easily its performance, especially poor performance, can be explained. Generally, for complex data, more sophisticated and complicated models tend to do better. However, because these models are more complicated, it becomes difficult to explain the effect of input data on the output result. For example, let us imagine that you are using a very complex Machine Learning model to predict the sales of a product. The inputs to this model are the amounts of money spent on advertising on TV, newspaper and radio. The complicated model may give you very accurate sales predictions but may not be able to tell you which of the three advertising outlets, TV, radio or newspaper, impacts sales more and is more worth the money. A simpler model, on the other hand, might have given a less accurate result, but would have been able to explain which outlet is more worth the money. You need to be aware of this trade-off between model performance and interpretability. This is crucial because where the balance should lie on the scale of explainability versus performance depends on the objective, and hence should be your decision to make.
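The advertising example above can be made concrete with the simplest interpretable model. The sketch below (pure NumPy, entirely made-up data) fits ordinary least squares to synthetic sales figures, so each coefficient directly states how much an extra unit of spend in each channel moves sales — exactly the per-outlet readout a complex black-box model struggles to give:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ad spend (in thousands) for TV, radio, newspaper.
X = rng.uniform(0, 100, size=(200, 3))

# Hypothetical "true" effects: TV and radio drive sales, newspaper barely does.
true_coef = np.array([0.047, 0.180, 0.002])
sales = X @ true_coef + 5.0 + rng.normal(0, 0.5, size=200)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, sales, rcond=None)

intercept, tv, radio, newspaper = coef
print(f"TV: {tv:.3f}, radio: {radio:.3f}, newspaper: {newspaper:.3f}")
# The coefficients ARE the explanation: in this synthetic setup, a unit of
# radio spend is worth roughly 4x a unit of TV spend, and newspaper adds
# almost nothing.
```

A deep model on the same data might predict marginally better, but it cannot hand you three numbers like these; that is the trade-off the paragraph describes.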

Supercomputer Intrusions Trace to Cryptocurrency Miners

Attacks against high-performance computing labs, which first came to light last week, appear to have targeted victims in Canada, China, the United States and parts of Europe, including the U.K., Germany and Spain. Security experts say all of the incidents appear to have involved attackers using SSH credentials stolen from legitimate users, which can include researchers at universities and consortiums. SSH - aka Secure Shell or Secure Socket Shell - is a cryptographic network protocol for providing secure remote login, even over unsecured networks. Supercomputers and high-performance clusters offer massive computational power to researchers. But any attackers able to sneak cryptomining malware into such environments could use them to mine for cryptocurrency, which involves solving computationally intensive problems in exchange for digital currency as a reward. On May 11, the bwHPC consortium, comprising researchers and 10 universities in the German state of Baden-Württemberg, reported that it had suffered a "security incident" and as a result took offline multiple supercomputers and clusters.

Apache Arrow and Java: Lightning Speed Big Data Transfer

Apache Arrow puts forward a cross-language, cross-platform, columnar in-memory data format for data. It eliminates the need for serialization as data is represented by the same bytes on each platform and programming language. This common format enables zero-copy data transfer in big data systems, to minimize the performance hit of transferring data. ... Conceptually, Apache Arrow is designed as a backbone for Big Data systems, for example, Ballista or Dremio, or for Big Data system integrations. If your use cases are not in the area of Big Data systems, then probably the overhead of Apache Arrow is not worth your troubles. You’re likely better off with a serialization framework that has broad industry adoption, such as ProtoBuf, FlatBuffers, Thrift, MessagePack, or others. Coding with Apache Arrow is very different from coding with plain old Java objects, in the sense that there are no Java objects. Code operates on buffers all the way down. Existing utility libraries, e.g., Apache Commons, Guava, etc., are no longer usable. You might have to re-implement some algorithms to work with byte buffers. And last but not least, you always have to think in terms of columns instead of objects.
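The shift in mindset the paragraph ends on — columns instead of objects — is independent of Arrow itself and can be illustrated in plain Python. Row-oriented code touches one object at a time; column-oriented code operates on whole buffers, which is what enables vectorized, zero-copy processing. This is a toy sketch of the layout idea, not Arrow's actual API:

```python
# Row-oriented: a list of objects, processed one record at a time.
rows = [
    {"id": 1, "price": 10.0, "qty": 3},
    {"id": 2, "price": 4.5, "qty": 2},
    {"id": 3, "price": 7.25, "qty": 8},
]
row_total = sum(r["price"] * r["qty"] for r in rows)

# Column-oriented: one contiguous buffer per field, processed as a whole.
# This mirrors Arrow's layout; real Arrow code would use value vectors
# backed by raw byte buffers rather than Python lists.
columns = {
    "id": [1, 2, 3],
    "price": [10.0, 4.5, 7.25],
    "qty": [3, 2, 8],
}
col_total = sum(p * q for p, q in zip(columns["price"], columns["qty"]))

assert row_total == col_total  # same answer, different memory layout
```

In the columnar form, a computation over `price` never has to load `id` or `qty` into cache, which is where the performance advantage of Arrow-style systems comes from.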

Quote for the day:

"If you don't write your own story, you'll be nothing but a footnote in someone else's." -- Peter Shankman

Daily Tech Digest - May 22, 2020

Assessing the Value of Corporate Data

“Value can be determined in a qualitative way,” says Grasso. This can be done via “a deep analysis of what are the key data the enterprise should harness in order to get a profitable return, [and that depends] on the business model of each organization.” That’s not to say there aren’t incentives, and means, to quantify the value of digital data. “I use a structured approach [by] aggregating a few components of our data’s value – intrinsic, derivative, and algorithmic,” says Lassalle at JLS Technology USA. “I use this approach to assess the true value, placing a real dollar amount on what data is worth to help organizations manage the risk around data as their most important asset.” Qualitative or quantitative, the value of data isn’t static. As data ages, for example, its value can wane. Conversely, real-time data is often extremely valuable, as is data supplemented with complementary data from other sources. Most people tend to focus on, and put value on, the data that emerges from their own operations or other familiar sources, notes Mark Thiele.

Deep Learning Architectures for Action Recognition

We have learned that deep learning has revolutionized the way we process videos for action recognition. Deep learning literature has come a long way from using improved Dense Trajectories. Many learnings from the sister problem of image classification have been used in advancing deep networks for action recognition. Specifically, the usage of convolution layers, pooling layers, batch normalization, and residual connections have been borrowed from the 2D space and applied in 3D with substantial success. Many models that use a spatial stream are pretrained on extensive image datasets. Optical flow has also had an important role in representing temporal features in early deep video architectures like the two-stream networks and fusion networks. Optical flow is our mathematical definition of how we believe movement in subsequent frames can be described as densely calculated flow vectors for all pixels. Originally, networks bolstered performance by using optical flow. However, this made networks unable to be end-to-end trained and limited real-time capabilities. In modern deep learning, we have moved beyond optical flow, and we instead architect networks that are able to natively learn temporal embeddings and are end-to-end trainable.
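The move from 2D to 3D convolution mentioned above simply adds time as a third sliding axis: the kernel now sees a short stack of frames at once. A naive NumPy sketch makes this concrete (illustrative only; real networks use optimized GPU kernels and many channels):

```python
import numpy as np

def conv3d_valid(video, kernel):
    """Naive 'valid' 3D convolution over a single-channel (T, H, W) clip."""
    t, h, w = video.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):          # slide over time
        for j in range(out.shape[1]):      # slide over height
            for k in range(out.shape[2]):  # slide over width
                patch = video[i:i+kt, j:j+kh, k:k+kw]
                out[i, j, k] = np.sum(patch * kernel)
    return out

# A tiny 8-frame, 16x16 "video" and a 3x3x3 kernel:
clip = np.random.default_rng(0).normal(size=(8, 16, 16))
kernel = np.ones((3, 3, 3)) / 27.0        # spatio-temporal averaging filter
features = conv3d_valid(clip, kernel)
print(features.shape)  # (6, 14, 14): each output saw 3 frames at a time
```

Because the kernel spans frames, the filter can respond to motion patterns directly, which is how 3D networks learn temporal embeddings without a separate optical-flow stream.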

The Road to Wi-Fi 6

We are more dependent on the network than ever before, and Wi-Fi 6 gives you more of what you need: a more consistent and dependable network connection that will deliver speeds up to four times faster than 802.11ac Wave 2 with four times the capacity. This standard provides a seamless experience for clients and enables next-generation applications such as 4K/8K streaming HD, augmented reality (AR) and virtual reality (VR) video, and more device and IoT capacity for high-density environments such as university lecture halls, malls, stadiums, and manufacturing facilities. Wi-Fi 6 also promises reduced latency, greater reliability, and improved power efficiency. With higher performance for mobile devices and the ability to support the Internet of Things (IoT) on a massive scale (IoT use has been trending upwards lately and is now also called “the new mobile”), Wi-Fi 6 will improve experiences across the entire wireless landscape. Wi-Fi 6 also offers improved security with WPA3, and improved interference mitigation for better QoE.

Increasing Software Velocity While Maintaining Quality and Efficiency

Throughout the software lifecycle there are many opportunities for automation — from software design, to development, to build, test, deploy and, ultimately, the production state. The more of these steps that can be automated, the faster developers can work. Perhaps the biggest area for potential time savings is testing, because of all the phases of the software lifecycle, testing assumes the most manual labor. The Vanson Bourne survey found test automation to be the single most important factor in accelerating innovation, according to 90% of IT leaders. ... Integration helps software development technologies interoperate with each other. An integrated pipeline leverages existing investments and enables developers to build, test and deploy faster while providing the additional benefit of reducing errors resulting from human intervention (furthering quality). Integration also decreases the manual labor needed to execute and manage workflow within software delivery processes.

Every Data Scientist needs some SparkMagic

Spark is the data industry’s gold standard for working with data in distributed data lakes. But in Spark clusters that are run cost-efficiently, often with multi-tenancy, it is difficult to accommodate individual requirements and dependencies. The industry trend for distributed data infrastructure is towards ephemeral clusters, which makes it even harder for data scientists to deploy and manage their Jupyter notebook environments. It’s no surprise that many data scientists work locally on high-spec laptops where they can install and persist their Jupyter notebook environments more easily. So far, so understandable. How do many data scientists then connect their local development environment with the data in the production data lake? They materialise csv files with Spark and download them from the cloud storage console. Manually downloading csv files from cloud storage consoles is neither productive nor particularly robust. Wouldn’t it be so much better to seamlessly connect a local Jupyter notebook with a remote cluster in an end-user-friendly and transparent way? Meet SparkMagic! Sparkmagic is a project for working interactively with remote Spark clusters in Jupyter notebooks through the Livy REST API. It provides a set of Jupyter Notebook cell magics and kernels to turn Jupyter into an integrated Spark environment for remote clusters.
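Under the hood, Sparkmagic drives the remote cluster through Livy's REST API. The sketch below builds (but does not send) the two requests involved — creating an interactive session and submitting a statement — using endpoint shapes from the Livy REST API; the host and the `executorCores` value are placeholders of ours:

```python
import json

LIVY_URL = "http://livy.example.com:8998"  # placeholder host

# 1. Create an interactive PySpark session: POST /sessions
create_session = {
    "url": f"{LIVY_URL}/sessions",
    "body": json.dumps({"kind": "pyspark", "executorCores": 2}),
}

# 2. Run code inside that session: POST /sessions/{id}/statements
session_id = 0  # in a real exchange, returned by the create call
run_statement = {
    "url": f"{LIVY_URL}/sessions/{session_id}/statements",
    "body": json.dumps({"code": "spark.range(100).count()"}),
}

print(create_session["url"])
print(run_statement["url"])
```

Sparkmagic wraps exactly this kind of exchange behind `%%spark` cell magics, polling the statement endpoint for results and rendering them in the notebook, so the local environment never needs Spark installed at all.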

ML.NET Model Builder is now a part of Visual Studio

ML.NET is a cross-platform, machine learning framework for .NET developers. Model Builder is the UI tooling in Visual Studio that uses Automated Machine Learning (AutoML) to train and consume custom ML.NET models in your .NET apps. You can use ML.NET and Model Builder to create custom machine learning models without having prior machine learning experience and without leaving the .NET ecosystem. ... Model Builder’s Scenario screen got an update with a new, modern design and with updated scenario names to make it even easier to map your own business problems to the machine learning scenarios offered. Additionally, anomaly detection, clustering, forecasting, and object detection have been added as example scenarios. These example scenarios are not yet supported by AutoML but are supported by ML.NET, so we’ve provided links to tutorials and sample code via the UI to help get you started.

Speaking of which, libraries and external dependencies are an efficient way to reuse functionality without reusing code. It’s almost like copying code, except that you aren’t responsible for maintaining it. Heck, most of the web today operates on a variety of frameworks and plugin libraries that simplify development. Reusing code in the form of libraries is incredibly efficient and allows each focused library to be very good at what it does and only that. And unlike in academia, many libraries don’t even require anything to indicate you’re building with or on top of someone else’s code. The JavaScript package manager npm takes this to the extreme. You can install tiny, single-function libraries—some as small as a single line of code—into your project via the command line. You can grab any of over one million open source packages and start building their functionality into your app. Of course, as with every approach to work, there’s a downside to this method. By installing a package, you give up some control over the code. Some malicious coders have created legitimately useful packages, waited until they had a decent adoption rate, then updated the code to steal bitcoin wallets.

Security & Trust Ratings Proliferate: Is That a Good Thing?

The information security world is rapidly gaining its own sets of scoring systems. Last week, NortonLifeLock announced a research project that will score whether Twitter accounts are likely to belong to a human or a bot. Security awareness companies such as KnowBe4 and Infosec assign workers grades for how well they perform on phishing simulations and the risk that they may pose to their employer. And companies such as BitSight and SecurityScorecard rate companies using external indicators of security and potential breaches. Businesses will increasingly look to scores to evaluate the risk of partnering with another firm or even employing a worker. A business's cyber-insurance security score could determine its cyber-insurance premium or whether a larger client will work with the firm, says Stephen Boyer, CEO at BitSight, a security ratings firm. "A lot of our customers will not engage with a vendor below a certain threshold because they have a certain risk tolerance," he says. "Business decisions are absolutely being made on this."

Microsoft launches Project Bonsai, an AI development platform for industrial systems

Project Bonsai is a “machine teaching” service that combines machine learning, calibration, and optimization to bring autonomy to the control systems at the heart of robotic arms, bulldozer blades, forklifts, underground drills, rescue vehicles, wind and solar farms, and more. Control systems form a core component of machinery across sectors like manufacturing, chemical processing, construction, energy, and mining, helping manage everything from electrical substations and HVAC installations to fleets of factory floor robots. But developing AI and machine learning algorithms atop them — algorithms that could tackle processes previously too challenging to automate — requires expertise. ... Project Bonsai is an outgrowth of Microsoft’s 2018 acquisition of Berkeley, California-based Bonsai, which previously received funding from the company’s venture capital arm M12. Bonsai is the brainchild of former Microsoft engineers Keen Browne and Mark Hammond, who’s now the general manager of business AI at Microsoft.

Hacked Law Firm May Have Had Unpatched Pulse Secure VPN

Mursch says that while his firm can scan open internet ports for vulnerable Pulse Secure VPN servers, he doesn't have insight into the law firm's internal network and can't say for sure whether the REvil operators used it to plant ransomware and encrypt files. Some security experts, including Kevin Beaumont, who is now with Microsoft, have previously warned that the REvil ransomware gang is known to target unpatched Pulse Secure VPN servers. When the gang attacked the London-based foreign currency exchange firm Travelex on New Year's Day, it was reported that the company used a Pulse Secure VPN server that was unpatched. Brett Callow, a threat analyst with security firm Emsisoft, also notes that REvil is known to use vulnerable Pulse Secure VPN servers to gain a foothold in a network and wait for some time before starting a ransomware attack against a target. "In other incidents, it's been established that groups have had access for months prior to finally deploying the ransomware," Callow tells ISMG.

Quote for the day:

"Becoming a leader is synonymous with becoming yourself. It is precisely that simple, and it is also that difficult." -- Warren G. Bennis

Daily Tech Digest - May 21, 2020

Unlocking Enterprise Blockchain Potential with Low-Code Capabilities

Low-code development platforms allow enterprises to reap the benefits of complex code, without the need to dedicate valuable time and resources toward development from the ground-up. “Plug and play” customization allows them to address specific needs within their organization, and prioritize implementation on a smaller scale without the stress of diving head-first into an infrastructural overhaul. Especially during our ongoing COVID-19 crisis, low-code eliminates the need for large dev teams to develop new software applications, allowing for a streamlined, timely transition as organizations dedicate their valuable resources elsewhere to help minimize the negative impact of COVID-19 on their workforce and their surrounding communities. Beyond this epidemic, these benefits provide risk-averse C-level decision makers with an easy and confident investment opportunity, as well as disruptive tools that deliver on the growing need for constant innovation — in an era where agility and digital transformation are now a necessity.

Phishing Attack Bypassed Office 365 Multifactor Protections

The phishing attack started with an email containing a malicious link designed to look like a SharePoint file, according to the report. The message in the email noted that the file relates to bonuses for the quarter - an effective lure to get a victim to click. If a targeted victim clicked the link, they were taken to the legitimate Microsoft Office 365 login page. But the URL had been subtly changed by the attackers to manipulate the authentication process. To log in to Office 365, a user typically needs permission from the Microsoft Graph authentication process and a security token from the Microsoft Identity Platform. This is where the OAuth 2.0 framework, which allows a user to grant an application limited access to their resources, and the OpenID Connect protocol, which helps applications verify a user's identity, came into play in the scam. These are designed to allow a user to log in without exposing credentials, according to the report. The altered URL contained parameters that captured the security tokens and other authentication data and then sent that information back to the attackers. In one example, Cofense found a "redirect" parameter in the URL that sent authentication data to a domain hosted in Bulgaria.
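A basic defensive check against the kind of manipulated login URL described above is to parse the query string and verify the redirect target against an allow-list. The URL and allow-list below are entirely hypothetical, constructed only to mirror the reported technique:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical allow-list of hosts a token may legitimately be sent to.
ALLOWED_REDIRECT_HOSTS = {"login.microsoftonline.com", "contoso.com"}

# Hypothetical phishing link: a legitimate-looking login host, but the
# redirect parameter points the token response at an attacker domain.
url = ("https://login.microsoftonline.com/common/oauth2/authorize"
       "?response_type=id_token"
       "&redirect_uri=https%3A%2F%2Fattacker.example.bg%2Fcollect")

params = parse_qs(urlparse(url).query)
redirect_host = urlparse(params["redirect_uri"][0]).hostname

if redirect_host not in ALLOWED_REDIRECT_HOSTS:
    print(f"Suspicious redirect target: {redirect_host}")
```

In practice this validation belongs on the identity provider's side (registered redirect URIs), but the same parse-and-compare logic is useful in mail-gateway and URL-scanning tooling.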

Ionic vs. Xamarin

In the ordinary world of web development, applying custom styling is relatively easy. Just port your existing components over to your new project, or apply the specific CSS edits that you need to make your app look and feel the way you want it to. But in the mobile world, this becomes a lot harder. For example, Xamarin Native uses only the native components available on iOS and Android. You won’t be able to just copy over your existing component library, and the styling and theming options are extremely limited. This is where Ionic’s approach is most valuable. Ionic UI components are just Web Components. By default, they are designed to look and feel native to iOS and Android; but under the hood, they’re just Web Components. If you already have a React or Web Component library, you can easily port those over to your mobile project. Or, you can edit any aspects of the UI using CSS, just as you would for any web project. This level of design customizability is unparalleled in the world of mobile app development.

How Agile Can Work Together with Deadlines

When attempting to soften arbitrary deadlines, your stakeholder relationships are key. Often, the drivers behind a fixed deadline are a lack of detail, context, and trust. For stakeholders to trust that they are going to get something delivered, and more than that something that is valuable delivered, you have to look to build up that understanding and that trust. Once you have built that up, you are also more likely to gain flexibility in your delivery timelines. At Loyalty, we made sure that we had regular open dialogues with a wide stakeholder group via weekly demos. We talked through the challenges, showed off what had been worked on that week, and acted as a source of truth on our progress. This avoided rumours or corridor chat that can undermine delivery if stakeholders are getting a mixed message. The demos not only built trust, but also removed any rumours; we were regularly available for questions and to have an open dialogue. The other key factor that I have already alluded to is frank conversations.

Why the economic recovery post COVID-19 is not doom and gloom for tech talent

There is no doubt that the recovery will be a long road ahead, but as we look to the future there are some promising signs about the market for STEM talent. Our data suggests that the demand for contract placements has remained intact. Even in markets such as the US and UK, while there has been some drop-off in the volume of candidates placed, demand for contract placements has remained consistent throughout March and April, because employers still need the right talent, but now more than ever they also need a flexible hiring approach that enables them to fill talent gaps on an ‘as-needed’ basis. Employers are also telling us that they will have significant talent gaps to fill upon an eventual recovery of the economy. This demand for quick access to talent could in turn be turbocharged by tech - employers will be much more open to shifting to remote working if it means widening their talent pools to meet urgent business demand. The days of candidates needing to be localised to their employer may be gone for good in several sectors - many are now saying that they see the shift to remote, flexible working becoming entrenched within their industry as a lasting change.

The Need for Compliance in a Post-COVID-19 World

US and UK cybersecurity officials warn that state-backed hackers and online criminals are taking advantage of people's anxiety over COVID-19 to lure them into clicking on links and downloading attachments in phishing emails that contain malware or ransomware. Corporate networks could also be vulnerable to attacks if companies do not invest in providing their employees with secure company laptops and in setting up virtual private networks (VPNs) or zero-trust access solutions. With all of this upheaval, business leaders need to keep their guard up. It's easy to lose focus and push off implementing security measures, managing risk, and keeping up with compliance requirements. But this would be a big mistake. Regulatory requirements are designed to ensure that organizations establish a solid cybersecurity program — and then monitor and update it on an ongoing basis. It's critical that organizations continue to stay compliant with applicable security standards and guidelines, especially those concerning policies and procedures, business continuity planning, and remote workers.

On Being (and Becoming) a Data Scientist

The discipline of data science includes a set of technical skills with broad applications that have grown in demand with the advent of “Big Data”. Data science now has too many use cases to count: epidemiology, pharmaceuticals, finance and banking, media and advertising. Even ‘Moneyball’. We are needed almost everywhere. The number of applications is both a blessing and a curse, however. As data scientists, we may understand the challenges at work in technical terms but lack an understanding of the broader context important to comprehending and solving problems in a meaningful, practical way. In establishing and building a career as a data scientist, domain matters. Unless you’re an industry expert who becomes a data scientist along the way, it takes time to be of use. We learn as we go, off and on, and not just the stack; we also have to find our way around the data. At some point, you’ll have to figure out whether the industry you’re in is something of interest to you beyond data science (unless, that is, it picks you). That’s the big, fundamental question.

There is a common misconception that remote workers won’t build strong relationships and company productivity will suffer as a result. The good news is this doesn’t appear to be true. In a remote world, bonding may take longer, but it does happen and can even “reach levels present in face-to-face communication,” according to a 2013 study published in Cyberpsychology. In fact, remote communication could actually be better for business, because it can bring a team closer together. “For strangers meeting for the first time, digital communication has been shown to enhance the intimacy and frequency of self-disclosure,” according to the researchers. They noted that “strangers meeting in text-based environments show higher affinity for one another than strangers meeting one another face to face.” Perhaps more importantly, study participants reported the same level of bonding after video chats as they did after in-person interactions. The level of bonding did decrease, however, with audio and instant message communication.

Using the 'Zero Trust' Model for Remote Workforce Security

An essential component of the zero trust model is verifying the devices from which data is being accessed, using technologies such as CASB and Web DLP. "If an employee is accessing my database through a personal device, the zero trust approach helps me check the device security posture," Khanna says. "Only after these verifications is the device allowed to access the database." Gary Hayslip, director of information security at SoftBank Investment Advisers in California, says the zero trust approach fits his company's 100% cloud approach. "For us it was all about having proper control over access. We wanted to have control and know who is accessing what kind of data," Hayslip says. "Now, whether workers are travelling or at home, we know the device, we know the user, we know the geolocation and we know what data the user accessed." When building a zero trust framework, Panchal says, it's critical to "capture every physical and digital footprint of the users' access to the enterprise applications and services using AI on top of every log to understand the user behavior in the system and grant access accordingly."
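The access checks described above — user identity, device posture, geolocation, and the sensitivity of the requested data — can be sketched as a deny-by-default policy decision. The sketch below is a minimal, hypothetical illustration of the idea, not the API of any CASB or zero-trust product; all field names and rules are assumptions for the example.

```python
from dataclasses import dataclass

# Hypothetical request attributes a zero trust gateway might evaluate.
# Field names and rules are illustrative, not any specific product's API.
@dataclass
class AccessRequest:
    user_authenticated: bool   # identity verified, e.g. via MFA
    device_managed: bool       # corporate-managed vs. personal device
    device_posture_ok: bool    # patched OS, disk encryption, EDR running
    geo_allowed: bool          # request originates from a permitted region
    resource_sensitivity: str  # "low", "medium", or "high"

def grant_access(req: AccessRequest) -> bool:
    """Deny by default; every check must pass for sensitive resources."""
    if not req.user_authenticated:
        return False
    if req.resource_sensitivity == "high":
        # High-sensitivity resources require a managed, healthy device
        # in an allowed location -- never trust the network alone.
        return req.device_managed and req.device_posture_ok and req.geo_allowed
    # Lower-sensitivity resources still require a healthy device posture.
    return req.device_posture_ok

# A personal device with good posture can reach low-sensitivity data...
print(grant_access(AccessRequest(True, False, True, True, "low")))   # True
# ...but is denied access to a high-sensitivity database.
print(grant_access(AccessRequest(True, False, True, True, "high")))  # False
```

In a real deployment these signals would come from the identity provider, an endpoint management agent, and network telemetry, and the decision would be re-evaluated continuously rather than once per session.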

Microsoft supercomputer looks to create AI models in Azure

While launching into the supercomputer market could give Microsoft's overall AI initiative a boost, one consultant said Microsoft still trails a few competitors, such as Google, in terms of general AI innovation. The best way for Microsoft to catch up is with a series of acquisitions of smaller AI companies. "Microsoft has made some acquisitions in this [AI] space, but they are still playing catch-up," said Frank Dzubeck, president of Communications Network Architects in Washington, D.C. "They are still focusing on application-specific algorithms for certain industries. They have made some headway but aren't yet where the Googles of the world are." There will be a "changing of the guard" in the AI market, Dzubeck said, led by a raft of both known and unknown fledgling AI companies, similar to what happened in the world of social networking 10 to 15 years ago. It is from among these companies that Microsoft, through acquisitions, will grow its fortunes in the AI market, he predicted.

Quote for the day:

"To get a feel for the true essence of leadership, assume everyone who works for you is a volunteer." -- Kouzes and Posner