Daily Tech Digest - April 15, 2020

Weekly health check of ISPs, cloud providers and conferencing services

Outages for ISPs globally were down 9.13% during the week of March 30 from the week before, whereas U.S. outages were down 16.7%, dropping from 120 to 100. Worldwide the outages were also down, from 252 to 229. Public cloud outages rose worldwide from 22 to 25, and in the U.S. there was one outage, up from zero the previous week. Outages for collaboration apps rose dramatically, increasing more than 260% globally and more than 500% in the U.S. over the week before. The actual numbers were an increase from eight to 29 worldwide, and up from four to 25 in the U.S. ... During the week of April 6-April 12, service outages for ISPs, cloud providers, and conferencing services dropped overall. They went from 298 down to 177 globally (40%, a six-week low), and in the U.S. dropped from 129 to 72 (44%). Globally, ISP outages were down from 229 to 141 (38%), and in the U.S. were down from 100 to 56 (44%). Cloud provider outages were also down overall from 25 to 19 (24%), ThousandEyes says, but jumped up from one to six (500%) in the U.S., which saw the highest rate of increase in seven weeks. Even so, the U.S. total was relatively low. “Again, cloud providers are doing quite well,” ThousandEyes says.


A Smattering of Thoughts About Applying Site Reliability Engineering principles

Google has a lot more detail on the principles of “on-call” rotation work compared to project-oriented work in Life of an On-Call Engineer. Of particular relevance is the mention of capping the time that Site Reliability Engineers spend on purely operational work at 50%, to ensure the remaining time is spent building solutions that improve automation and service reliability in a proactive, rather than reactive, manner. In addition, the switching between reactive operational work and getting in the zone on solving project work with code can limit the ability to address the toil of continual fixes. Google's SRE Handbook also addresses this, advising that you should definitely not mix operational work and project work on the same person at the same time. Instead, whoever is on call for that reactive work should focus fully on it, and not try to do project work at the same time; trying to do both results in frustration and fragmented effort. This is refreshing, as I know I've felt the pressure of needing to deliver a project while the pressure of reactive operational issues takes precedence.


Coronavirus: Zoom user credentials for sale on dark web


Analysis of the database found that alongside personal accounts belonging to consumers, there were also corporate accounts registered to banks, consultancies, schools and colleges, hospitals, and software companies, among many others. IntSights said that whilst some of these accounts only included an email and password, others included Zoom meeting IDs, names and host keys. “The more specific and targeted the databases, the more it's going to cost you. A database of random usernames and passwords is probably going to go pretty cheap because it's harder to utilise,” Maor told Computer Weekly. “But if somebody says they have a database of Zoom users in the UK the price is going to get much higher because it's much more specific and much easier to use.” Whilst it is not uncommon at all for usernames and passwords to be shared or sold, Maor said that some of the discussions that followed had been intriguing, with the sale spawning a number of different posts and threads discussing different approaches to targeting Zoom users, many of them focused on credential stuffing attacks.


Remote work will be forever changed post-COVID-19

The problem with these two competing visions is that they assume we'll return to an extreme version of a pre-COVID-19 scenario, either doubling down on traditional remote working arrangements, or spending even more time traveling and sitting in offices, working the way we always did before the virus. I believe that the key lessons many of us will take from this period of enforced remote work are less about location, and more about time and work management. One thing I noticed and confirmed with several colleagues early in my COVID-19 experience was that productive video conferences were mentally more exhausting than an equivalent in-person meeting. A two-hour workshop over videoconference had the same mental drain as an all-day affair in an in-person meeting, especially for the presenters and facilitators. The medium seems to force more intense interactions, and more planning to successfully orchestrate. Collaborating in the same physical space was the pre-COVID-19 norm since it was easy.


Comparing Three Approaches to Multi-Cloud Security Management


IaC is a second approach to multi-cloud management. This approach arose in response to utility computing and second-generation web frameworks, which gave rise to widespread scaling problems for small businesses. Administrators took a pragmatic approach: they modeled their multi-cloud infrastructures with code, and were therefore able to write management tools that operated in a similar way to standard software. IaC sits in between the other approaches on this list, and represents a compromise solution. It gives more fine-grained control over cloud management and security processes than a CMP, especially when used in conjunction with SaaS security vendors whose software can apply a consistent security layer to a software model of your cloud infrastructure. This is important because SaaS is growing rapidly in popularity, with 86% of organizations expected to have SaaS meeting the vast majority of their software needs within two years. On the other hand, IaC requires a greater level of knowledge and vigilance than either CMP or cloud-native approaches.
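The core IaC idea, modeling infrastructure as data in code and letting tooling compute what must change, can be sketched in a few lines. This is a toy illustration, not any real IaC tool's API; the resource names and fields are hypothetical.

```python
# Infrastructure declared as data, as an IaC tool would model it.
desired = {
    "web-vm": {"cloud": "aws",   "size": "small", "open_ports": [443]},
    "db-vm":  {"cloud": "azure", "size": "large", "open_ports": [5432]},
}

# What actually exists right now (note the stray open port 22).
actual = {
    "web-vm": {"cloud": "aws", "size": "small", "open_ports": [443, 22]},
}

def plan(desired, actual):
    """Diff desired vs. actual state, like an IaC tool's plan step."""
    changes = []
    for name, spec in desired.items():
        if name not in actual:
            changes.append(("create", name))
        elif actual[name] != spec:
            changes.append(("update", name))  # e.g. close port 22
    for name in actual:
        if name not in desired:
            changes.append(("delete", name))
    return changes

print(plan(desired, actual))  # [('update', 'web-vm'), ('create', 'db-vm')]
```

Because the model is plain data, the same diff logic can apply a consistent security policy (such as the allowed-ports check implied above) across clouds, which is the fine-grained control the article attributes to IaC.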


DevOps implementation is often unsuccessful. Here's why

The primary feature of DevOps is, to a certain extent, the automation of the software development process. Continuous integration and continuous delivery (CI/CD) principles are the cornerstones of this concept, and as you likely know, are very reliant on tools. Tools are awesome, they really are. They can bring unprecedented speed to the software delivery process, managing the code repository, testing, maintenance, and storage elements with relatively seamless ease. And if you’re managing a team of developers in a DevOps process, these tools and ​the people who use them are a vital piece of the puzzle​ in shipping quality software. However, while robots might take all our jobs and imprison us someday, they are definitely not there yet. Heavy reliance on tools and automation leaves a window wide open for errors. Scans and tests may not pick up everything, code may go unchecked, and that presents enormous quality (not to mention security) issues down the track. An attacker only needs one back door to exploit to steal data, and forgoing the human element in quality and security control can have disastrous consequences.


Videoconferencing quick fixes need a rethink when the pandemic abates

A tier down from its immersive telepresence big brother is the multipurpose conference room. Inside offices, companies have designated multipurpose rooms, equipped more minimally with videoconferencing equipment. Instead of spending big bucks on devoting an entire room, with all of the bells and whistles, to an immersive telepresence system, why not outfit a conference room with enough cameras, screens and microphones to offer a good virtual meeting experience, while still leaving the room to be used for general meetings? These multipurpose rooms generally cost a few thousand dollars to outfit with a camera, a microphone array and maybe some integrated digital whiteboards, and a PC or iPad as a control mechanism, Kerravala says. It's a lot more affordable, but a multipurpose conference room still is bandwidth intensive. And it's likely to be tapping bandwidth on the shared network – instead of having its own pipe, as an immersive room would – and that needs to be taken into consideration in network capacity planning.



Information Age roundtable: Harnessing the power of data in the utilities sector

When it comes to data usage across the company, a major aspect to be considered is the trust that is placed in employees. For Graeme Wright, chief digital officer, manufacturing, utilities and services at Fujitsu UK, “data is only trusted with certain people. Sometimes, it goes across organisational boundaries, because of the third-party suppliers that people are using, and I don’t know if people have really been incentivised to exploit the value of that data.” Wright went on to explain that the field force “need a different method of interacting to make sure that the data flows freely from them into the actual centre so we can actually analyse it and understand what’s going on”. Steven Steer, head of data at Ofgem, also weighed in on this issue: “This is really central to the energy sector’s agenda over the last year or so. The Energy Data Task Force, an independent task force, published its findings in June, and one of the main findings was the presumption that data is open to all, not just within your own organisation.”



At first glance, low-code and cloud-native don’t seem to have much to do with each other — but many of the low-code vendors are still making the connection. After all, microservices are chunks of software code, right? So why hand-code them if you could take a low-code approach to craft your microservices? Not so fast. Microservices generally focus on back-end functionality that simply doesn’t lend itself to the visual modeling context that low-code provides. Furthermore, today’s low-code tools tend to center on front-end application creation (often for mobile apps), as well as business process workflow design and automation. Bespoke microservices are unlikely to be on this list of low-code sweet spots. It's clear from the definition of microservices above that they are code-centric and thus might not lend themselves to low-code development. However, how organizations assemble microservices into applications is a different story. Some low-code vendors would have you believe that you can think of microservices as LEGO blocks that you can assemble into applications. Superficially, this LEGO metaphor is on the right track – but the devil is in the details.


Graph Knowledge Base for Stateful Cloud-Native Applications

As a rule, stateless applications do not persist any client application state between requests or events. “Statelessness” decouples cloud-native services from client applications to achieve desired isolation. The tenets of microservice and serverless architecture expressly prohibit retention of session state or global context. However, while the state doesn’t reside in the container, it still has to live somewhere. After all, a stateless function takes state as inputs. Application state didn’t go away; it moved. The trade-off is that state, and with it any global context, must be re-loaded with every execution. The practical consequence of statelessness is a spike in network usage, which results in chatty, bandwidth- and I/O-intensive inter-process communications. This comes at a price, in terms of both increased cloud service expenditures and latency and performance impacts on client applications. Distributed computing had already weakened the bonds of data gravity as a long-standing design principle, forcing applications to integrate with an ever-increasing number of external data sources. Cloud-native architecture flips the script completely: data ships to functions.
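The re-load-on-every-execution trade-off can be made concrete with a toy stateless handler. The store here is just an in-memory dict standing in for an external cache or database; the session and field names are illustrative.

```python
STATE_STORE = {}  # stands in for an external cache/database

def stateless_handler(session_id, event):
    """A stateless function: holds nothing between invocations."""
    # Every execution pays the cost of re-loading context from the store...
    state = STATE_STORE.get(session_id, {"count": 0})
    state["count"] += event.get("increment", 1)
    # ...and of writing it back before returning.
    STATE_STORE[session_id] = state
    return state["count"]

print(stateless_handler("s1", {"increment": 2}))  # 2
print(stateless_handler("s1", {}))                # 3 -- state survived, but
                                                  # only via the round trip
```

In a real deployment those two dictionary accesses are network calls, which is exactly the chatty, I/O-intensive communication pattern the paragraph describes.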



Quote for the day:


"Leaders must be good listeners. It's rule number one, and it's the most powerful thing they can do to build trusted relationships." -- Lee Ellis


Daily Tech Digest - April 14, 2020

Microsoft and Google delay online authentication change


Companies are gradually replacing this method with more modern protocols. Microsoft and Google are both shifting to OAuth 2.0, which uses tokens to authenticate applications with online services, and gives them an expiry date. That way, an application stays authorised for a predefined period, minimising the need to exchange credentials. This also makes it easier to implement multi-factor authentication (MFA). Microsoft announced that it would switch off Basic Authentication in its Exchange Web Services (EWS) API for Office 365 back in July 2018. It planned to turn off support for the feature entirely on 13 October 2021. At the same time, it also advised developers to begin moving away from this API, instead using Microsoft Graph, which is its newer API for accessing back-end cloud services such as Exchange Online. It also expanded those plans in September 2019, announcing that it would turn off Basic Authentication in Exchange Online for Exchange ActiveSync (EAS), POP, IMAP and Remote PowerShell.
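The key property of token-based schemes like OAuth 2.0, that an application holds a short-lived, expiring token rather than the user's password, can be sketched in a few lines. This is a toy model, not a real OAuth client; the field names merely mirror common OAuth 2.0 response fields.

```python
import time

def issue_token(lifetime_seconds):
    """Mint a toy access token with an expiry, OAuth 2.0-style."""
    return {
        "access_token": "opaque-token-value",  # not the user's credentials
        "expires_at": time.time() + lifetime_seconds,
    }

def is_valid(token):
    """An expired token is useless, limiting the damage if it leaks."""
    return time.time() < token["expires_at"]

token = issue_token(3600)     # authorised for a predefined period
print(is_valid(token))        # True while within its lifetime
expired = issue_token(-1)     # already past expiry
print(is_valid(expired))      # False
```

Because validity is checked against the expiry on every use, credentials need not be exchanged per request, and revocation or MFA step-up can happen at token-refresh time rather than inside every application.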



How to achieve agile DevOps: a disruptive necessity for transformation

Organisations have to accept that the transition to agile DevOps is going to be disruptive, but entirely necessary for effective and sustainable transformation. According to Erica Langhi, EMEA senior solutions architect at Red Hat, “the best way to mitigate this disruption is through transparency and openness — businesses need to make the benefits of this transition clear to their teams. After that, they should encourage their developers and operations teams to look at how other parts of the business are working.” After this, leaders will need to look at the company’s culture “and start making the tweaks necessary to promote collaboration and communication between teams; this isn’t optional, as nine out of ten organisations that try to make the change to DevOps without changing their culture and structure will fail,” she advised. Overall, to create a maximally agile DevOps, organisations “should also invest in a few other technologies and cultural changes. DevOps in fact brings together people, processes, and technology for better efficiency. ...” Langhi continued.


Defining the Database Requirements of Dynamic JAMstack Applications


To understand why multi-region distribution is desirable, let’s revisit why static websites on CDNs are incredibly fast. A CDN is fast to deliver your content because it contains copies of your content at different locations. When content is requested from the CDN from a specific location, it will attempt to deliver that content from the closest location to the requestor. In order to get an idea of how much that matters, take a glance at the Zeit CDN status page which shows you the difference in latency between your current location and other locations. By deploying our applications to a CDN, our pages automatically load from the closest location to the user, which results in low loading latencies. And low latencies result in a great user experience. In order to keep this user experience, the dynamic data that will be loaded from our APIs has to exhibit low latencies as well, and the best way to achieve this is to use a distributed database.
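The routing decision the article describes, serving each request from the nearest copy, reduces to picking the replica with the lowest measured latency. A minimal sketch, with made-up latency numbers for a hypothetical user:

```python
# Hypothetical round-trip times (ms) from one user to each replica region.
replica_latency_ms = {
    "us-east": 95,
    "eu-west": 12,
    "ap-south": 210,
}

def nearest_replica(latencies):
    """Route to whichever region answers fastest for this user."""
    return min(latencies, key=latencies.get)

print(nearest_replica(replica_latency_ms))  # eu-west
```

A CDN does this for static pages; the article's point is that a distributed database must do the same for the dynamic API data, or the low page-load latency is wasted on slow data fetches.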


Talking Digital Future: Blockchain Technology

Indeed, the United Nations World Food Program, for example, is serving an incredibly large number of people. And we want the highest amount of good resources to go to those people — so they are. The U.N. did a first round of experimentation on blockchain so it could track the flow of aid from source to destination, and it was very successful. Now, it’s in the second or third round of expanding it. I think I like this technology because it directly and positively impacts human beings. This is probably one of my favorite cases at the moment. Another one is the real estate registries. Very often these are paper-based. I think about New Orleans when Hurricane Katrina came a few years ago. The city was flooded, and it was a complete disaster. It was a terrible tragedy. When the water subsided and the city was getting back on its feet, lots of houses were destroyed and the city had to find the titles for the homes. Well, they were destroyed because they were in boxes and the papers were in the basement of a building that was flooded. So, they had a lot of difficulty for a very long time identifying which properties belonged to who, and then how they could sell the properties.


Edge computing vs. cloud computing: What's the difference?

Real-time performance is one of the main reasons for using an edge computing architecture, but not the only one. Edge computing can also help prevent overloading network backbones by processing more data locally and sending to the cloud only data that needs to go to the cloud. There could also be security, privacy, and data sovereignty advantages to keeping more data close to the source rather than shipping it to a centralized location. There are plenty of challenges ahead for edge computing, however. A recent Gartner report, How to Overcome Four Major Challenges in Edge Computing, suggests “through 2022, 50 percent of edge computing solutions that worked as proofs of concept (POCs) will fail to scale for production use.” Those who pursue the promise of edge computing need to be prepared to tackle all the usual issues associated with technologies that still need to prove themselves – best practices for edge system management, governance, integration, and so on have yet to be defined.
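The bandwidth-saving pattern mentioned above, process readings at the edge and forward only what matters, is simple to sketch. The threshold and sensor fields here are illustrative only:

```python
THRESHOLD = 75.0  # illustrative alert threshold for a sensor reading

def edge_filter(readings):
    """Run locally at the edge: keep only readings worth shipping upstream."""
    return [r for r in readings if r["value"] > THRESHOLD]

# A batch of local sensor readings; only the anomalies leave the site.
sensor_batch = [{"id": i, "value": v}
                for i, v in enumerate([40.2, 80.5, 61.0, 99.9])]
print(edge_filter(sensor_batch))  # two of four readings go to the cloud
```

Here half the batch never touches the network backbone, which is the overload-prevention benefit (and, since raw data stays on site, part of the privacy and sovereignty argument as well).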


Enterprises regard the cloud as critical for innovation, but struggle with security


Only a little over half (58%) said their organization has clear guidelines and policies in place for developers building applications and operating in the public cloud. And of those, 25% said these policies are not enforced, while 17% confirmed their organization lacks clear guidelines entirely. “Enterprises believe they must choose between innovation and security—a false choice we see manifested in the results of this report, as well as in conversations with our customers and prospects,” said Brian Johnson, CEO at DivvyCloud. “Only 35% of respondents do not believe security impedes developers’ self-service access to best-in-class cloud services to drive innovation—meaning 65% believe they must choose between giving developers self-service access to tools that fuel innovation and remaining secure. The truth is, security issues in the cloud can be avoided. By employing the necessary people, processes, and systems at the same time as cloud adoption, enterprises can reap the benefits of the cloud while ensuring continuous security and compliance.”


Developers: Getting ahead is about more than programming languages


From a career perspective, IT professionals will often reach a point where they have to choose between becoming a technical specialist or moving down the management path. But even for those on the management path it is incredibly important that they stay up to date with what is new in tech as it becomes all too easy to fall out of step, he said. Gill says another trend within the IT industry is for companies to become more customer-focused in how they develop their products and services. In light of this, ambitious IT professionals must develop an understanding of the clients' needs as well as the intricacies of the code. "They should discuss requirements directly with them where possible or else with their points of contact within their own organisation, such as sales or business development. Having direct feedback and input from clients means the IT professionals will have a far greater chance of delivering something that will meet their needs," says Gill. Malcolm Lowe, head of IT at Transport for Greater Manchester (TfGM), is another tech chief who believes focusing on the needs of the user is the key to career-development success. He advises other IT professionals to couch everything they do in business outcomes and user needs – because, at the end of the day, that's what you're providing.


How to build a DevSecOps strategy

Almost every DevOps guide talks about implementing the practice at a cultural level, and the same is true with DevSecOps. Developers tend to be incredibly creative and talented people who take a lot of pride in what they do. Get out of their way and allow them to grow. Think of it as future-proofing your security design through a more holistic approach. That’s precisely why the first step on this list is training and educating team members. When given a chance, they will work to further their skills and experience. They will also take everything they learn and incorporate it into the code and content they’re creating. It’s all about giving them the tools they need to succeed, which will only further improve the end product. ... Most likely, there are projects and segments already in place, and your teams created existing code with a different method. Don’t look at this as a negative or obstacle. It provides an excellent opportunity to revisit the foundations of a system to implement the protective armour we’re discussing.


As cybersecurity concerns grow, so does need for security professionals


For people who already work in IT but choose to refocus their energies in the area of cybersecurity, the switch can be lucrative. Job-market analytics company Burning Glass Technologies has been tracking the cybersecurity job market since 2013. In its June 2019 report, it states that the number of cybersecurity job postings has grown 94% since 2013, compared to only 30% for IT positions overall. This growth is three times faster than the overall IT market. Burning Glass’s research shows that cybersecurity jobs account for 13% of all IT jobs. On average, however, cybersecurity jobs take 20% longer to fill than other IT jobs and pay 16% more. This works out to an average of $12,700 more per year. According to the U.S. Bureau of Labor Statistics, the average salary for an information security analyst is $98,350. Analysts plan and carry out security measures to protect an organization’s computer networks and systems. “Their responsibilities continually expand as the number of cyberattacks increases,” Li says.


What Is A Data Passport: Building Trust, Data Privacy And Security In The Cloud


Data passport technology is based on classic mainframe technology, which today can include full encryption of your data, ensuring that every piece of data is encrypted. When each piece of data is encrypted, even if it is stolen, it can’t be used. Data passports allow you to extend the encryption technology that used to be available only on a physical mainframe to cloud computing. Each piece of data in the cloud has a passport assigned to it, and with the passport, you can verify if the data is misused, if the passport is still valid, etc. These data passports also give companies the ability to protect data and revoke access to it at any time, across a multi-cloud environment. Because the data carries its passport — and its encryption — with it, it will help enterprises secure their data wherever it travels. And that's the most significant development that makes data passports so unique and important: the protection and enforcement of data privacy and security are available on and off any given platform as it travels with the data.
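A toy model can make the revocation property concrete: the ciphertext travels with a passport ID, while the key and validity flag stay in a central registry, so access can be cut off wherever the data has gone. The XOR "cipher" below is a deliberately naive stand-in for real encryption, and all names are hypothetical, not IBM's actual mechanism.

```python
import secrets

PASSPORTS = {}  # passport_id -> {"key": ..., "valid": ...} (central registry)

def protect(data: bytes):
    """Encrypt data and issue it a passport; the key never travels with it."""
    key = secrets.token_bytes(len(data))          # one-time pad for the demo
    passport_id = secrets.token_hex(8)
    PASSPORTS[passport_id] = {"key": key, "valid": True}
    ciphertext = bytes(a ^ b for a, b in zip(data, key))
    return passport_id, ciphertext

def access(passport_id, ciphertext):
    """Decrypt only if the passport is still valid."""
    passport = PASSPORTS.get(passport_id)
    if not passport or not passport["valid"]:
        raise PermissionError("passport revoked or unknown")
    return bytes(a ^ b for a, b in zip(ciphertext, passport["key"]))

pid, blob = protect(b"patient-record")
print(access(pid, blob))           # b'patient-record'
PASSPORTS[pid]["valid"] = False    # revoke, even after the data has travelled
try:
    access(pid, blob)
except PermissionError:
    print("revoked")
```

The stolen ciphertext alone is useless, and flipping one flag in the registry enforces revocation across every platform the data has reached, which is the multi-cloud property the article highlights.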



Quote for the day:



"Leaders must know where they are going if they expect others to willingly join them on the journey." -- Kouzes & Posner


Daily Tech Digest - April 13, 2020

Banks should be cautious with use of AI in cybersecurity
Financial institutions must be prepared, however, for cybercriminals to counter new defences with continually evolving methods of their own. Instead of executing cyberattacks with the intention of stealing money or making fraudulent payments, cyber criminals may target the machine learning processes, embedding fraudulent mechanics into the way the AI engines work. “One of the big concerns, especially at the regulatory level for the future, is ultimately the underlying data integrity,” Holt says. “So, if the attackers don’t do big enormous payouts immediately but attempt to alter the underlying data, how would that be spotted?” Therein lies the danger for financial services companies which are overly optimistic about the potential for AI in cybersecurity. Dries Watteyne, head of SWIFT’s cyber fusion centre, urges caution in this area. “When talking about the potential of machine learning, I think we shouldn’t forget everything we achieved to date without it.”


Windows Subsystem for Linux 2 Moving into General Availability

As discussed previously, WSL 2 is a change in architecture from WSL 1. Where WSL 1 required a translation layer between the Linux system calls and the Windows NT kernel, WSL 2 ships with a lightweight VM running a full Linux kernel. This VM runs directly on the Windows Hypervisor layer. This kernel includes full system call compatibility and allows for running apps like Docker and FUSE natively on Linux. With this new implementation, the Linux kernel has full access to the Windows file system. This new release brings large improvements to performance, especially for interactions that require accessing the file system. According to Craig Loewen, Program Manager at Microsoft, this could be between a 3 to 6 times performance improvement depending on how file intensive the application is. He further mentions that unzipping tarballs could see a 20 times performance increase. With this upcoming new version of Windows 10, currently known as version 2004, Microsoft has indicated that the installation and updating process of WSL 2 will be streamlined.


Zoom vs. Microsoft Teams: Video chat apps for working from home, compared


Teams has a similar feel to Slack -- you can talk to team members privately or in specific channels, and you can call attention to the whole group or just an individual with the mention feature.  You can video chat with up to 250 people at once with Teams, or present live to up to 10,000 people. Share meeting agendas prior to a conference, invite external guests to join a meeting, and access past meeting recordings and notes. Meetings can be scheduled in the Teams app or through Outlook. ... The Zoom video conference app works for Android, iOS, PC and Mac. The app offers a basic free plan that hosts up to 100 participants. There are also options for small and medium business teams ($15-$20 a month per host) and large enterprises for $20 a month per host with a 50-host minimum. You can adjust meeting times, and select multiple hosts. Up to 1,000 users can participate in a single Zoom video call, and 49 videos can appear on the screen at once. The app has HD video and audio capabilities, collaboration tools like simultaneous screen-sharing and co-annotation, and the ability to record meetings and generate transcripts.



Creating a Text-to-speech engine with Google Tesseract and Arm NN on Raspberry Pi

The network’s architecture can be divided into three significant steps. The first one takes the input image and then extracts features using several convolutional layers. These layers partition the input image horizontally. For each partition, these layers determine the set of image column features. The sequence of column features is used in the second step by the recurrent layers. The recurrent neural networks (RNNs) are typically composed of long short-term memory (LSTM) layers. LSTMs revolutionized many AI applications, including speech recognition, image captioning, and time-series analysis. OCR models use RNNs to create the so-called character-probability matrix. This matrix specifies the confidence that the given character is found in the specific partition of the input image. Thus, the last step uses this matrix to decode the text from the image. Usually, people use the connectionist temporal classification (CTC) algorithm. The CTC aims at converting the matrix into the word or sequence of words that makes sense.
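The final decoding step can be sketched with a minimal greedy CTC decoder: take the most likely symbol per partition, collapse repeats, and drop the blank symbol. Real OCR pipelines typically use beam search over the probability matrix, but greedy decoding shows the idea; the alphabet and matrix below are made up for illustration.

```python
BLANK = "-"  # CTC's special blank symbol, placed last in the symbol set

def ctc_greedy_decode(columns, alphabet):
    """columns: rows of probabilities over `alphabet` plus trailing blank."""
    symbols = alphabet + BLANK
    # Best symbol for each image partition (argmax per row).
    best = [symbols[max(range(len(col)), key=col.__getitem__)]
            for col in columns]
    out, prev = [], None
    for ch in best:
        if ch != prev and ch != BLANK:  # collapse repeats, skip blanks
            out.append(ch)
        prev = ch
    return "".join(out)

# Character-probability matrix over alphabet "ab" plus blank; each row is
# one horizontal partition of the input image. Best path: a, a, -, b.
matrix = [
    [0.8, 0.1, 0.1],
    [0.7, 0.2, 0.1],
    [0.1, 0.2, 0.7],
    [0.1, 0.8, 0.1],
]
print(ctc_greedy_decode(matrix, "ab"))  # ab
```

The blank between repeated best symbols is what lets CTC distinguish a genuinely doubled letter (as in "aa") from one letter spread across two partitions, which is why the collapse rule skips blanks only after de-duplicating.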


Collaboration answers the call

Senior Reporter Matthew Finnegan, who covers collaboration for Computerworld, addresses the question in the back of everyone's mind: "Remote working, now and forevermore?" Surveys show that the majority of people prefer to work from home — and in organizations that have had mature work-from-home policies for a while, many employees have settled into their new reality as if it's no big deal. The office won't go away overnight. But as long as productivity endures, and as collaboration tools inevitably improve, why not allow people to work wherever they like? Matthew and IDG TechTalk's Juliet Beauchamp discuss these and other possibilities on a special episode of Today in Tech. One thing's for sure: Videoconferencing is proving itself the lifeblood of remote work. But can networks handle it? By all accounts, the public internet and even cloud services have held up remarkably well. Yet as analyst Zeus Kerravala observes in "Videoconferencing quick fixes need a rethink when the pandemic is over," written by Network World contributor Sharon Gaudin, those who return to the office and want to continue Zooming or Webexing could face obstacles.


Why Industries Should Prepare For Mass Blockchain Adoption

First and foremost, the token market is likely to be significantly reduced this year, and only the most highly demanded and well-developed projects will remain as the digital assets traded on exchanges that are increasingly being forced to comply with legal requirements. Another change this year will be a gradual transition to turnkey solutions. The idea of blockchain turnkey solutions was first presented by Bitwings, an official blockchain-based solution of the leading Spanish mobile operator Wings Mobile. Its goal was to create the most secure standards for e-devices without compromising the operating system and its performance. To integrate turnkey solutions, companies need to conduct internal research: analysis of the current market and existing problems, and the potential of the blockchain in different sectors. It’s also worth studying the existing centralized and decentralized solutions and deciding how to integrate the solution into production processes without disrupting their performance. The latter is the most important point; it is one that all executive officers should pay attention to. They must consider the most efficient options for integrating blockchain into their working processes.


DDR5 memory promises a significant speed boost

"What's important about DDR5 is the high level of integration it provides," says Jim Handy, principal analyst with Objective Analysis, an analyst firm specializing in the memory market. "The people who defined this spec took advantage of the fact that Moore's Law not only reduces DRAM's price per bit, but it also makes it cheaper to add increasing amounts of powerful logic to the chip. They have artfully used this to improve the CPU-DRAM bandwidth, to move the Memory Wall a little farther out." The Same Bank Refresh is a good example, Handy says. "For DRAM's entire history a chip couldn't provide data while it was being refreshed. Now Same Bank Refresh allows data to be accessed in banks that aren't undergoing refresh. This does a lot to improve data communication." So when will this start to show up? Last year an Intel roadmap was leaked to the hobbyist press that showed Intel was planning to move to DDR5 and PCI Express 5 (completely skipping PCIe v4) in 2021. Micron has begun sampling DDR5, Hynix said it plans to begin volume production at the end of this year, and Samsung plans to start DDR5 production next year.


Don’t Leave “Ethical Tech” Out of Your Digital Transformation Plan

Few organizations and their leaders develop an overall approach to the ethical impacts of technology use—at least not at the start of a digital transformation. In a recent study, just 35 percent of respondents said their organization’s leaders spend enough time thinking about and communicating the impact of digital initiatives on society. But in order to be truly savvy in the age of advanced, connected, and autonomous technologies, leaders must think beyond designing and implementing technologically driven capabilities. They should consider how to do so responsibly from the start. At Deloitte, we see a relationship between a company’s digital and technological progress—in other words, its tech savviness—and its focus on various ethical issues related to technology. Our research found that 57 percent of respondents from organizations considered to be “digitally maturing” say their organization’s leaders spend adequate time thinking about and communicating digital initiatives’ societal impact, compared with only 16 percent of respondents from companies in the early stages of their digital transformation.


Duplication, fragmentation hamper interoperability efforts, impact patient safety


Duplicate records might also contain incomplete or outdated information and can affect the quality of care by forcing clinicians to make care decisions without important information such as recent lab results, allergies and current medications. Back in 2019, Verato and AdVault partnered on a cloud-based patient matching platform which aims to expand secure identity matching so care teams have seamless access to medical records. Patient matching specialist Verato, which has also partnered with healthcare IT security specialist Imprivata, is of the belief that alignment of disparate patient record platforms will help eliminate duplicate records, establish more accurate care histories and improve patient safety. In a 2016 Ponemon Institute survey, 86% of respondents said they witnessed a medical error as a direct result of misidentification, and indicated that 35% of all denied claims are due to misidentification, which can cost hospitals up to $1.2 million a year. "Many systems still do not communicate and store data in disjointed architectures and an upsurge of identifiers continues to be created," Doug Brown, managing partner of Black Book, said in a statement.


COBOL, COVID-19, and Coping with Legacy Tech Debt

With a history that stretches back three generations, COBOL was developed for a different breed of compute, Edenfield says. “These were massive machines that did certain things like number crunching,” he says. “It wasn’t fancy.” COBOL was designed to move across multiple machines and frankly to be readable, Edenfield says. “People could learn it quickly and it was easier than an assembly language where you are programming in very cryptic commands.” As new compute demands emerged, programming languages evolved, Edenfield says. Agile development and other modern processes can be more efficient and fundamentally different from how COBOL and other early programming languages were handled. Despite such advances, it is a challenge to escape those legacy roots. “Because COBOL was so prevalent, they can’t get out of it,” he says. “There’s so much of it. It’s running all the backroom, payment processing for all your major financial institutions; all your big companies have it.” It was common for organizations to constantly build up COBOL-based systems for decades, Edenfield says, with the programmers retiring or moving on. “Pretty soon, the people who wrote the systems aren’t there anymore,” he says.



Quote for the day:


"Many people go fishing all of their lives without knowing it is not fish they are after." -- Henry David Thoreau


Daily Tech Digest - April 12, 2020

AI (Artificial Intelligence) Projects: Where To Start?

GUI (Graphical User Interface) concept.
You don’t want to spend time and money on a project and then realize there are legal or compliance restrictions. This could easily mean having to abandon the effort. “First, customer data should not be used without permission,” said Debu Chatterjee, who is the senior director of platform AI engineering at ServiceNow. “Secondly, bias from data should be mitigated. Any model which is a black box and cannot be tested through APIs for bias should be avoided. The risk of bias is present in nearly any AI model, even in an algorithmic decision, regardless of whether the algorithm was learned from data or written by humans.” In the early phases of an AI project, there should be lots of brainstorming. This should also involve a cross-section of people in the organization, which will help with buy-in. The goal is to identify a business problem to be solved. “For many companies, the problem is that they start with a need for technology, and not with an actual business need,” said Colin Priest, who is the VP of AI Strategy at DataRobot. “It reminds me of this famous quote from Steve Jobs, ‘You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to sell it.’”
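
Chatterjee's point that models should be "tested through APIs for bias" can be made concrete with a small sketch. Here `score_applicant` is a hypothetical stand-in for any black-box prediction endpoint; the demographic-parity check treats it purely as an opaque function, which is exactly what API-level bias testing requires:

```python
# A minimal sketch of black-box bias testing through a prediction API.
# `score_applicant` stands in for a model endpoint you can call but not
# inspect; the function, fields, and threshold are all hypothetical.

def score_applicant(applicant: dict) -> bool:
    # Stub model for illustration: approves on income alone.
    return applicant["income"] >= 50_000

def demographic_parity_gap(predict, records, group_key):
    """Difference in positive-prediction rates between groups (0 = parity)."""
    rates = {}
    for group in {r[group_key] for r in records}:
        members = [r for r in records if r[group_key] == group]
        rates[group] = sum(predict(r) for r in members) / len(members)
    return max(rates.values()) - min(rates.values())

applicants = [
    {"group": "A", "income": 60_000}, {"group": "A", "income": 52_000},
    {"group": "B", "income": 48_000}, {"group": "B", "income": 55_000},
]
gap = demographic_parity_gap(score_applicant, applicants, "group")
print(f"demographic parity gap: {gap:.2f}")  # flag if above your tolerance
```

Demographic parity is only one of several fairness metrics, but the pattern is the same for all of them: probe the API with structured inputs and compare outcomes across groups.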


How to Reduce Remote Work Security Risks

Employees should remain cautious about downloading random applications or software, to avoid malware, viruses, or insecure protocols. If they’re unsure, they should check with IT support or their security team. Also, remind remote workers to be careful when sharing confidential data. They should use company-issued apps for file sharing, storage of confidential documents, and communication. Let them know this is for their own safety, too: the company has protective measures around these apps and can monitor for suspicious behavior. Consistently communicate with your employees. Ultimately, keeping everyone informed on how to secure their home technologies and practice security in their everyday lives trumps technology alone. Maintain communication across a variety of channels to keep them up to date on the latest security threats and how to reduce the risk to their personal and company information. Make sure your security and IT experts are household names, available for questions and for sharing red flags.


Automated Machine Learning Is The Future Of Data Science

Data Science
The objective of AutoML is to shorten the cycle of trial-and-error experimentation. It churns through an enormous number of models, and the hyperparameters used to configure those models, to determine the best model for the data presented. This is a dull and time-consuming activity for any human data scientist, however skilled. AutoML platforms can perform this repetitive task more quickly and thoroughly, arriving at a solution faster and more effectively. The ultimate value of AutoML tools is not to replace data scientists but to offload their routine work and streamline their process, freeing them and their teams to focus their energy and attention on the parts of the process that require a higher level of reasoning and creativity. As their roles change, it is important for data scientists to understand the full life cycle so they can shift their energy to higher-value tasks and sharpen the skills that further elevate their value to their companies.
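
Stripped to its core, the search an AutoML platform automates is a loop over candidate configurations. The sketch below is a deliberately tiny stand-in (toy data, a two-parameter linear model, exhaustive search); real platforms add cross-validation, many model families, and smarter search strategies:

```python
# A stripped-down sketch of what an AutoML loop automates: enumerate
# candidate hyperparameters, score each one, keep the best.
import itertools

# Toy data: roughly y = 2x + 1 with a little noise baked in.
data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]

def fit_error(slope, intercept):
    # Sum of squared errors of the candidate line over the toy data.
    return sum((slope * x + intercept - y) ** 2 for x, y in data)

# The "search space": every combination is tried, as the article describes.
slopes = [1.0, 1.5, 2.0, 2.5]
intercepts = [0.0, 0.5, 1.0, 1.5]

best = min(itertools.product(slopes, intercepts), key=lambda p: fit_error(*p))
print(f"best hyperparameters: slope={best[0]}, intercept={best[1]}")
```

A data scientist could do this by hand; the value of automating it is that the machine can sweep spaces far too large for a human to cover, which is exactly the offloading the excerpt describes.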



How Hyperscale Storage Is Becoming More Accessible

It is a scale-out solution that enables you to scale compute and storage independently, through software-defined storage. You can pick any client, any server, any network: we can run on a Quanta server, HP, or Dell, with an Intel CPU, AMD, or even Arm. There are two main components that I want to touch on. The first one is NVMe over TCP. This is a standard that we invented together with Facebook, Dell, Intel, and a few others, and today the standard is fully ratified. What we have here is a highly optimized user-space TCP stack that, combined with the NVMe stack, gives us the ability to support thousands of connections and thousands of containers in a very large data center, whether bare-metal or virtualized. The second very important layer is the global FTL. FTL is a flash translation layer, the layer you can find in every SSD: it performs the translation between the logical transaction to the storage system and the physical transaction to the flash. What we have done is take that per-drive layer and make it global.
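
The address indirection an FTL performs can be illustrated with a toy model. This is only a sketch of the logical-to-physical mapping; a real FTL (per-drive or global) also handles wear leveling, garbage collection, and erase blocks:

```python
# A toy flash translation layer (FTL) to illustrate the logical-to-physical
# mapping described above. Only the address indirection is modeled.

class ToyFTL:
    def __init__(self, num_pages: int):
        self.mapping = {}                   # logical page -> physical page
        self.free = list(range(num_pages))  # physical pages available
        self.flash = {}                     # physical page -> data

    def write(self, logical: int, data: bytes):
        # Flash can't overwrite in place: each write goes to a fresh page,
        # and the old physical page is marked free for later erasure.
        phys = self.free.pop(0)
        if logical in self.mapping:
            self.free.append(self.mapping[logical])
        self.mapping[logical] = phys
        self.flash[phys] = data

    def read(self, logical: int) -> bytes:
        return self.flash[self.mapping[logical]]

ftl = ToyFTL(num_pages=8)
ftl.write(0, b"v1")
ftl.write(0, b"v2")  # same logical page lands in a new physical location
print(ftl.read(0))   # the mapping hides the relocation from the client
```

Making this layer "global" means the mapping spans flash across many servers instead of living inside one SSD, which is what lets the cluster place and move data freely.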


COVID-19 is accelerating CI/CD adoption

As it turns out, the stakes are much higher given the now-pervasive work-from-home arrangements most organizations embrace. In a phone interview, Rose stressed that even after years of DevOps discussion, “You still have a lot of companies that are doing most of their software testing on-prem and behind the firewall. The big installed base remains Jenkins in a proprietary data center.” This wasn’t ideal but it was workable when developers and operations professionals worked in an office environment, within the firewall. In a remote-only situation, getting access to the application development workflow is “tricky,” he stresses, because, in part, there’s no guarantee that you’ll be able to VPN in. And so companies are moving much faster than planned from private data centers to public clouds, in an effort to move workloads to a place where modern CI/CD can happen. “All the timelines have shrunk,” Rose says. Over the last two years companies have realized they need to move faster, but perhaps still struggled to start moving. “Now every company is trying to get apps to be cloud-enabled or cloud-native,” he stresses.


Zoom Promises Geo-Fencing, Encryption Overhaul for Meetings


In response to Citizen Lab's report, Zoom immediately promised to implement geo-fencing to ensure that no keys get routed via China, except for China-based users. Yuan attributed the routing of keys via China to a development error as the company attempted to rapidly scale up to meet a surge of demand, starting in China, where the COVID-19 outbreak began, leading the company to allow much greater, free access to its tool, in part, to support medical professionals. (Free versions typically otherwise have a 40-minute time limit for meetings.) "In February, Zoom rapidly added capacity to our Chinese region to handle a massive increase in demand," Yuan says. "In our haste, we mistakenly added our two Chinese data centers to a lengthy whitelist of backup bridges, potentially enabling non-Chinese clients to - under extremely limited circumstances - connect to them (namely when the primary non-Chinese servers were unavailable). This configuration change was made in February." He says Zoom fixed this problem immediately after learning of it via Citizen Lab. "We have also been working on improving our encryption and will be working with experts to ensure we are following best practices," Yuan says.


DevOps proponent lays it on the line: stop the madness and start automating


The final three steps are where many development teams tend to stumble, Davis says. "The most blissful thing about writing code or doing a complex admin task and so forth is when you get everything in your head, and you can see how everything fits together, and the world disappears, and you know exactly how your org works, and anybody could ask for any change and you can fix things. Developers live for that blissful feeling -- to know everything and fix anything." The catch is, a particular project ends, distractions distract, new projects begin, and time passes, Davis continues. "That disappears out of your working memory, right? There may be a day, or a week, or a month delay before you know that you broke something. By the time three weeks has elapsed, you forgot that you even built that thing. And if you remember that you built it, you forget how you built it, you forget exactly why you built it. You can make another change of course, but then it might take you another three weeks until you can get that back to your users." Multiply this by hundreds or even thousands of change requests within a large organization, and it's easy to see how things can go awry. DevOps brings order and flow to this potential madness, and Davis boils it down to a three-step process: development, innovation delivery, and operations.


New machine learning method could supercharge battery development for electric vehicles

"Computers are far better than us at figuring out when to explore—try new and different approaches—and when to exploit, or zero in, on the most promising ones." The team used this power to their advantage in two key ways. First, they used it to reduce the time per cycling experiment. In a previous study, the researchers found that instead of charging and recharging every battery until it failed (the usual way of testing a battery's lifetime), they could predict how long a battery would last after only its first 100 charging cycles. This is because the machine learning system, after being trained on a few batteries cycled to failure, could find patterns in the early data that presaged how long a battery would last. Second, machine learning reduced the number of methods they had to test. Instead of testing every possible charging method equally, or relying on intuition, the computer learned from its experiences to quickly find the best protocols to test. By testing fewer methods for fewer cycles, the study's authors quickly found an optimal ultra-fast-charging protocol for their battery.
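
The explore/exploit loop described here can be sketched as an epsilon-greedy bandit. The protocols, lifetimes, and noise model below are invented for illustration, and the study's actual closed-loop method is more sophisticated than this, but the structure is the same: a cheap early-cycles score steers which protocol gets tested next:

```python
# A bare-bones explore/exploit loop in the spirit the article describes:
# an epsilon-greedy bandit picks which (hypothetical) charging protocol to
# test next, using a cheap early-cycles score instead of cycling to failure.
import random

random.seed(0)

# Hidden "true" mean lifetimes of three candidate protocols (made up).
TRUE_LIFETIME = {"proto_a": 700, "proto_b": 900, "proto_c": 650}

def early_cycle_score(protocol: str) -> float:
    # Stand-in for the lifetime predictor: a noisy estimate from ~100 cycles.
    return TRUE_LIFETIME[protocol] + random.gauss(0, 50)

def mean_estimate(samples):
    return sum(samples) / max(len(samples), 1)

estimates = {p: [] for p in TRUE_LIFETIME}
for trial in range(60):
    if trial < 3 or random.random() < 0.2:   # explore: try something at random
        choice = random.choice(list(TRUE_LIFETIME))
    else:                                    # exploit: retest the current leader
        choice = max(estimates, key=lambda p: mean_estimate(estimates[p]))
    estimates[choice].append(early_cycle_score(choice))

best = max(estimates, key=lambda p: mean_estimate(estimates[p]))
print(f"best protocol so far: {best}")
```

The two accelerations from the article map directly onto this sketch: `early_cycle_score` replaces cycling to failure, and the exploit branch concentrates the test budget on promising protocols instead of spreading it evenly.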


How Big Data and IoT Are Connected


Sensors upon sensors will crop up in all sorts of technologies, if they aren’t there already. Gigabytes and terabytes of information will whizz between devices at a frightening speed, and big data technologies will work even harder to store, process, and extract value from the collected, yet often unstructured, sensory information. Endpoints in numerous locations will unlock an almost unlimited amount of data; what happens to that data will be decided by those who work in the IoT and big data industries. This interaction will likely create two winners: first, the businesses that can profit from the information provided, and second, the end users, who have better information to act on. Ultimately, businesses seeking to implement IoT into their products are also seeking greater profits, more productivity, higher efficiency, and reduced costs. The development of big data technologies works in favor of IoT companies, with both seeking to shape the ways in which we see and utilize data sets. As for the customer or end user, they will (if they aren’t already) benefit from more useful information, as well as improved customer service and experiences.


In a related twist, customers will, unsurprisingly, first call their ISPs whenever there is any connectivity problem. Providing that service means a larger call staff. But what if the problem is a specific device? More complex still, what if it’s a specific application running on the phone? An ISP that can quickly identify the root cause of the issue can either fix its own problems or point the customer toward the appropriate firm for service. Doing that efficiently will save enormous amounts of money. Identifying technical issues is a clear use case for AI. The question that needs to be answered is how close to the devices an AI system can run. On the ISP’s servers, there’s a distance that can obscure some issues. It would be much better to run AI on an individual home’s modem or, better still, its router. The question then becomes footprint: even inference-time AI is not known for highly efficient resource usage, and many companies have been working to address that for IoT applications. One company addressing the issue for the connected home is Veego, which claims to have AI inference that runs on home routers and modems in order to identify performance issues.



Quote for the day:


"As a leader, you set the tone for your entire team. If you have a positive attitude, your team will achieve much more." -- Colin Powell


Daily Tech Digest - April 11, 2020

Expressing The BIAN® Reference Model For The Banking Industry In The Archimate® Modeling Language


The expression of the BIAN model in ArchiMate has been a joint effort by BIAN and The Open Group, the stewards of the ArchiMate standard. The full details of this mapping can be found in the document “ArchiMate® Modeling Notation for the Financial Industry Reference Model: Banking Industry Architecture Network (BIAN)” published by The Open Group. To explain the use of BIAN in the ArchiMate language, The Open Group has published a case study whitepaper co-authored by one of us (Patrick), which uses the fictitious but realistic Archi Banking Group as an example. In this blog, we want to give you an impression of what this is about, picking and choosing some of the juiciest bits. For the full case study, please refer to the whitepaper. Archi Banking Group is the result of the acquisition of several banks in different countries, as most international banks are nowadays. This has come with the typical challenges of integration and cost control. In particular, its fragmented information is becoming a compliance risk, and the challenges of ‘open banking’ (e.g., PSD2) are difficult to meet.



Development Versus QA: Ending the Battle Once and for All


The reason why minimizing blame is the number one priority for QA engineers is that in the QA realm, there is a general acceptance that bugs are always going to make it to production, no matter what. This is something we expect because a 100% guaranteed bug-free product would take years to ship rather than weeks, and would therefore be economically unviable. Since they know there will be problems to deal with no matter what they do, they want to show that they did everything in their power to prevent those problems. Naturally, they want to write as many tests as possible to minimize the risk of bugs that they should have caught. But since it’s impossible to write an infinite number of tests, they have to prioritize what to test for. A QA team is given no data by which to prioritize what to test, so this prioritization is essentially a guessing game. It may be an educated guessing game based on experience and expertise, but it’s still predicting what users are most likely to do on an application without objective data as to what they really care about and how they really will use the application.
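
The contrast with data-driven prioritization is easy to sketch: given even crude usage analytics, a test budget can be weighted toward what users actually do instead of toward guesses. The feature names and counts below are invented for illustration:

```python
# A minimal sketch of usage-weighted test prioritization: allocate the test
# budget in proportion to how often users exercise each feature. Feature
# names and usage counts are made up for illustration.

usage = {"checkout": 9000, "search": 6000, "profile_edit": 800, "export_csv": 200}
TEST_BUDGET = 40  # total test cases the team can write this sprint

total = sum(usage.values())
allocation = {feature: round(TEST_BUDGET * count / total)
              for feature, count in usage.items()}
print(allocation)  # rarely used features get little of the budget
```

Proportional allocation is the crudest possible policy (risk-weighted schemes also factor in failure impact), but even this removes the pure guesswork the excerpt describes.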



Microsoft Teams Promises Great Video Calls: No More Typing Or Dog Noises

As reported by Venture Beat, Microsoft has promised AI-enhanced innovations which will be able to suppress background noise, in real time, so your call can continue smoothly. Instead of merely reducing the impact that an air conditioning unit has on the call, Teams will aim to suppress other noises not normally covered, such as doors slamming, over-excited typing on a computer keyboard or my beloved pooch having an inconvenient moment. The keyboard is a case in point. If you’re taking notes during an interview, you ideally don’t want that clickety-clack noise to intrude on the conversation. It’s those noises which aren’t “stationary,” as Microsoft says, that are hard to suppress without AI. It takes hundreds of hours of data to work out what’s desirable and what’s not, using audio books to represent voices and then other sources to create those pesky noises. All of which leads to the creation of a neural network, trained on that data, to sort out what should be heard and what shouldn’t. The power of the cloud can be leveraged to help, providing fast, real-time analysis of what’s going on and deciding what should be heard by the person at the other end of the call and what shouldn’t.
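
Why stationary noise is the easy case can be shown with a crude energy gate (not Microsoft's approach, which is a trained neural network). A threshold learned from a steady hum removes the hum, but a keyboard click has speech-like energy and passes straight through; all signals here are synthetic toys:

```python
# A naive energy gate: learn a threshold from noise-only audio, then drop
# any frame below it. Works for a steady hum, fails for a keyboard click.

def frame_energy(frame):
    return sum(s * s for s in frame) / len(frame)

hum    = [[0.05, -0.05, 0.05, -0.05]] * 5  # steady air-conditioner-like hum
speech = [[0.5, -0.4, 0.6, -0.5]] * 3      # louder, voice-like frames
click  = [[0.7, -0.6, 0.0, 0.0]]           # short, loud keyboard transient

threshold = 2 * max(frame_energy(f) for f in hum)  # learned from noise-only audio

def gate(frames):
    """Keep only frames whose energy exceeds the stationary-noise threshold."""
    return [f for f in frames if frame_energy(f) > threshold]

print(len(gate(hum)), len(gate(speech)), len(gate(click)))
# prints "0 3 1": the hum is removed, speech is kept, but the click survives too
```

The click surviving the gate is the whole point: non-stationary noise can't be separated from speech on energy alone, which is why Teams needs a model trained to recognize what speech and keyboard clatter actually sound like.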



Scientists develop AI that can turn brain activity into text

The system was not perfect. Among its mistakes, “Those musicians harmonise marvellously” was decoded as “The spinach was a famous singer”, and “A roll of wire lay near the wall” became “Will robin wear a yellow lily”. However, the team found the accuracy of the new system was far higher than previous approaches. While accuracy varied from person to person, for one participant just 3% of each sentence on average needed correcting, better than the 5% word error rate of professional human transcribers. But, the team stress, unlike the latter, the algorithm only handles a small number of sentences. “If you try to go outside the [50 sentences used] the decoding gets much worse,” said Makin, adding that the system is likely relying on a combination of learning particular sentences, identifying words from brain activity, and recognising general patterns in English. The team also found that training the algorithm on one participant’s data meant less training data was needed from the final user – something that could make training less onerous for patients.
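
The word error rate quoted above is a standard measure: word-level edit distance (substitutions, insertions, deletions) divided by the length of the reference sentence. A minimal implementation:

```python
# Word error rate (WER) via word-level Levenshtein distance:
# (substitutions + insertions + deletions) / reference length.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn first i reference words into first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

# The article's worst-case decoding scores very badly by this measure:
print(wer("a roll of wire lay near the wall",
          "will robin wear a yellow lily"))
```

Note that WER can exceed 1.0 when the hypothesis inserts many extra words, which is one reason the 3% vs. 5% comparison above is so striking.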


IBM, Open Mainframe Project launch initiative to help train COBOL coders


Despite its age, COBOL is reliable and is still widely used -- there are an estimated 220 billion lines of COBOL still in use today. IBM, one of the founding organizations behind COBOL, continues to offer mainframes compatible with the language. The issue with COBOL now is that there are few programmers left with the skills to maintain legacy COBOL applications. Specifically, state agencies are struggling to find actively working COBOL engineers who can update their unemployment benefit systems to factor in new parameters for unemployment eligibility. To address this skills gap, IBM and the Linux Foundation's Open Mainframe Project have launched a new program to help connect states with programmers who have COBOL language skills that are proving key in the push to manage the surging number of unemployment claims nationwide. ... "We've seen customers need to scale their systems to handle the increase in demand and IBM has been actively working with clients to manage those applications," said Meredith Stowell, VP of IBM Z Ecosystem. "There are also some states that are in need of additional programming skills to make changes to COBOL."


World Economic Forum explores blockchain interoperability

blockchain interoperability
Blockchain interoperability is often viewed as a technical challenge, but there’s a lot more to it than that. The WEF divides it into three layers: the business, the platform, and the infrastructure. The business aspect encompasses the governance of the blockchain and trust between the two networks, as well as data standardization. To share data, it has to be standardized. But often this homogeneity is focused within a single network as opposed to across networks. Other business aspects include incentives and the legal framework, which can be a bigger challenge across jurisdictions. The platform refers to the blockchain protocol, consensus mechanism, smart contract languages, and how users are authorized and permissioned. And the infrastructure looks at the hosting of servers in hybrid clouds, managed blockchains, and whether there are potentially proprietary components that might hinder interoperability. The report explores different projects that implement interoperability, mostly for public blockchains, including the well-known Cosmos and Polkadot. For enterprise blockchain, the WEF referred to Hyperledger Quilt, the open source implementation of Ripple’s Interledger, as well as the Corda Settler.


Cybersecurity officials say state-backed hackers taking advantage of pandemic

Silhouettes of laptop users are seen next to a screen projection of binary code are seen in this picture illustration taken March 28, 2018.
“Bad actors are using these difficult times to exploit and take advantage of the public and business,” Bryan Ware, CISA’s assistant director for cybersecurity, said in a statement. The agencies warned that hackers were also exploiting growing demand for work-from-home solutions by passing off their malicious tools as remote collaboration software produced by Zoom and Microsoft. Hackers are also targeting the virtual private networks that are allowing an increasing number of employees to connect to their offices, the agencies said. ... “Crowdsourced security platforms are built to simultaneously enable a remote workforce and help organizations maximize their security resources while benefiting from the intelligence and insights of a ‘crowd’ of security researchers,” Bugcrowd CEO Ashish Gupta told VentureBeat. “In the current environment, a lot of companies don’t have the required resources to secure and test remote environments where the majority of business is now taking place.”


AIoT and Intelligence on the Edge


Edge intelligence allows large volumes of data to be processed and analyzed, and decisions to be made, locally, without being sent to the cloud. Take, for example, a self-navigating drone: instead of relying on a cloud-hosted service to tell it where to go next, the drone itself can decide its own path in the field, even when connections to cloud-hosted services are unreliable. ... For architects and program leads working on such initiatives within a company, it’s mainly a mindset change in how the solution is designed, including the capabilities of the devices on the edge and where the decision-making step in a process happens. The feasibility of scenarios such as a drone calculating its own path instead of relying on a cloud-hosted service is now better than before, and a few demos or proof-of-concept attempts could move many of these stories off the backlog and bring implementation dates forward. While AIoT in its re-imagined, converged form may be new, the two original fields (AI and IoT) that merged to create it are both mature and well into mainstream adoption.


What do CISOs want from cybersecurity vendors right now?

CISOs cybersecurity vendors
The polled executives advised companies providing cybersecurity solutions to avoid sales pitches that involve fear-mongering, to dial down cold calls and emails, and to concentrate on nurturing existing relationships. “Messaging ought to be geared towards impacting an enterprise’s bottom line or community, rather than attempting to fearmonger or stoke panic over a situation already causing CISOs enough anxiety,” YL Ventures explained. “Cybersecurity executives feel quite unanimously about the marketing frenzy and, according to our sources, are compiling a ‘black list’ of vendors guilty of using this tactic.” Companies should concentrate on discovering what they can do to help their existing customers and discussing their customers’ experiences. Not only will this improve customer relations, but also provide helpful information that can inform the vendor’s future plans. Last but not least, vendors should consider making goodwill gestures. “Profiteering off of a world-wide tragedy will do vendors little service in the eyes of prospective customers. 41% of the CISOs we consulted with praised technology companies using their services to help other businesses and advised entrepreneurs to follow in their lead instead,” YL Ventures noted.


Why architecting an enterprise should not be IT-centric


The first and most important reason that architecture should not be IT-centric is the same reason why more and more IT functions are being merged with ‘business functions’. A popular metaphor was (is?) that information should be like water coming out of a faucet. In that metaphor, the IT department is responsible for developing IT to deliver the information the ‘business’ needs. The business asks for ‘information provisioning’; the IT department delivers. This ‘what vs. how’ division has been the reason for dysfunctional business/IT cooperation in many organisations over the past decades. An enterprise in general does not need ‘information’ as such; it needs resources and technology to execute business processes. The type of technology is not very important from a business perspective: it could be humans doing the job, mechanical or digital technology, and mostly it will be a mesh of all these. As a side remark: yes, data as a source for data intelligence could be seen as a product delivered by an organisational department, but that is only a small part of the totality of digital technology.



Quote for the day:


"Conviction is worthless unless it is converted into conduct." -- Thomas Carlyle


Daily Tech Digest - April 10, 2020

WiFi for Enterprise IoT: Why You Shouldn’t Use It


It’s the job of the local IT team to make sure their enterprise’s IT infrastructure is secure and reliable. Connecting dozens, hundreds, or even thousands of devices to that IT infrastructure poses a high risk to both security and reliability while offering little upside to the IT team. It may be true that your IoT solution will generate immense value for the enterprise to which you’re deploying, but this value is often not to the IT team directly. The local IT team will have other internal requests on their plate, and providing you support so you can deploy your IoT solution will likely be low on their list of priorities. This means that the stakeholders whom you need most, due to their understanding of and control over the local WiFi setup, are least incentivized to help you. Let me be clear: I’m not attacking IT teams generally, but I’m pointing out the inherent misalignment of incentives even with the most capable and well-meaning IT teams. ... The lack of end-to-end control means that the success or failure of your IoT solution doesn’t rest solely within your hands. Customers don’t care why their shiny new IoT solution isn’t working or that it’s not your fault; they just care that it isn’t working.



10 Ways to Spot a Security Fraud

The Latin phrase "caveat emptor" has become an English proverb, and for good reason. "Let the buyer beware" is an axiom that nearly all of us are familiar with. Most of us know the phrase in the context of retail purchases. We were taught, or have learned over time, to never take sellers at their word. We must always perform the appropriate research before making a purchase. In security, unfortunately, we must practice a different type of caveat emptor. In recent years, security has become a hot field. And sadly, where there is budget and focus, there are also frauds and deceivers. There is no shortage of people presenting themselves as security experts. Some of them truly are. The rest of them, however, are keen to take advantage of security professionals who haven't yet learned to filter the real security experts from the fakes. ... Honest, hard-working security professionals have no problem emailing or otherwise putting agreements into writing. It's very common for a meeting to result in a follow-on email with minutes and action items.


The CSI Effect Comes to Cybersecurity


The problem is that forensic science is often portrayed as providing definite and irrefutable proof when the truth is that, outside of DNA analysis, forensic science should only be used as supplementary weight to support an allegation. In reality, forensic science is used relatively sparingly, especially when eye-witness, circumstantial and alibi evidence is available. It’s comparatively expensive, time-consuming, and rarely the definitive evidence that TV suggests. When it comes to cybersecurity investigations, instead of swabs, fingerprints and fibers, a key source of evidence is system logs. Everything from applications to devices is capable of generating an audit trail, ‘logging’ activities and events. At its simplest, if we have a record of logons to a system, and we know when our breach happened, we have a cyber ‘smoking gun’. If we can use log data for a reconstruction post-attack, why can’t log events be used to pre-empt a breach, providing an early warning that suspicious activity is taking place? This is the promise of contemporary SIEM technology, an automated system to capture sufficient evidence to not just understand the timeline of a breach, but to detect the warning signs of an attack before it happens.
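
The "early warning" idea reduces to watching the audit trail as it arrives rather than reconstructing it afterwards. A minimal sketch, with an invented log format and an arbitrary threshold, that flags bursts of failed logons:

```python
# A miniature SIEM-style detection rule: alert on users with too many
# failed logons inside a sliding time window. Log format, window size,
# and threshold are all made up for illustration.
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_FAILURES = 3  # more than this many failures per window is suspicious

events = [  # (epoch seconds, user, outcome): a toy audit trail
    (100, "alice", "success"),
    (110, "bob", "failure"), (115, "bob", "failure"),
    (120, "bob", "failure"), (125, "bob", "failure"),
    (400, "carol", "failure"),
]

def suspicious_users(events):
    failures = defaultdict(list)
    alerts = set()
    for ts, user, outcome in events:
        if outcome != "failure":
            continue
        # Keep only failures still inside the sliding window, then add this one.
        failures[user] = [t for t in failures[user] if ts - t < WINDOW_SECONDS]
        failures[user].append(ts)
        if len(failures[user]) > MAX_FAILURES:
            alerts.add(user)
    return alerts

print(suspicious_users(events))  # bob trips the threshold; carol does not
```

Production SIEMs layer correlation across many log sources on top of rules like this, but the shift in posture is the same: the log stream becomes a tripwire, not just an after-the-fact exhibit.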


Security-by-Design Principles Are Vital in Crisis Mode

Cybersecurity
As organizations move to expand remote working and automation capabilities during the crisis, they are more likely to make mistakes. “You can’t let either the technology or the new business processes outpace the security behind it. You need to ensure that your internal security team is a part of every decision you make regarding new technology, processes or ways of working.” Experts recommend making security a consideration at the earliest possible stage when planning technology deployments. “Make sure you bring in the stakeholders, the business as well as the operators into security discussions,” recommended Bob Martin, co-chair of the Software Trustworthiness Task Group at the Industrial Internet Consortium. “You need to consider [security] as one of the primary aspects of any solution and, like the foundations of a house, everything else is built on top of that,” said Andrew Jamieson, director, security and technology at UL. Organizations that neglect to build a correct foundation risk having to rebuild it, or “at least spend a great deal of time and effort fixing something that could have been much more easily remedied earlier on,” Jamieson said.


CD Foundation Serves Up Tekton Pipelines Beta

CD Foundation
The beta release of Tekton Pipelines is significant because it signals that the project is now stable enough to be incorporated into DevOps platforms, and from here on it will follow the same deprecation policies as Kubernetes in terms of supporting previous releases. However, Wilson noted that Tekton Triggers, Tekton Dashboard, Tekton Pipelines CLI and other components are still alpha, and as such may evolve from release to release in ways that are not necessarily backward-compatible. In the meantime, the Tekton Pipelines team is encouraging all Tekton projects and users to migrate their integrations to the latest version of the Custom Resource Definitions (CRDs) that serve as the project's application programming interface (API), and is making a migration guide available. The Tekton Pipelines project is one of several initiatives being advanced under the guidance of the CD Foundation, an arm of The Linux Foundation. Other projects include Jenkins and Jenkins X, a pair of open source CI/CD projects originally developed by CloudBees, and Spinnaker, a CD platform originally created by Netflix.


ARming a new industry: Manufacturing can fully realise the potential of AR


AR is a frontrunner to help minimise machine downtime and streamline the supply chain process. For instance, when engineers need to communicate with off-site experts to maintain machinery, on-screen 3D annotations can be used to direct less experienced technicians. This is a crucial aspect of AR, as it can help to address any skills gaps being experienced. Being able to access the knowledge of an expert technician to support in-house or field technicians decreases the amount of time needed to repair machines and get them back up and running. The technology is also being used as an invaluable training tool, allowing manufacturers to assess and maintain more stringent levels of quality control, as well as to develop talented engineers. Furthermore, AR can help in more recent developments such as proactive maintenance. Using advanced analytics, manufacturers can identify potential errors and use remote experts and AR-annotated displays to guide on-the-ground workers to fix problems before they become a major threat to the manufacturing line.


Zoom, Netflix discuss remote network management challenges


Application performance problems are typically not network problems and relate more to UX. As more employees work from home, IT teams may assume UX issues stem from the organization's network rather than the user's application performance. These issues may also cause network engineers to doubt their skill sets in this unfamiliar territory, Viavi said. However, if a business aims to operate as usual -- even in an unusual time -- then network engineers should likewise approach network issues and remote network management as usual. This means conducting packet analysis and other standard troubleshooting techniques to determine whether an issue stems from the business network or from a user's application or network connection. Netflix's Temkin said his team faced occasional strain in last-mile connections, as did Dzmitry Markovich, senior director of engineering at Dropbox.
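One standard way to separate "network problem" from "application problem" is to time the TCP handshake separately from the full application response: if the handshake is fast but the response is slow, the network path is probably fine and the application is the bottleneck. The sketch below illustrates only the handshake-timing half using the standard library; host and port are placeholders.

```python
import socket
import time

def connect_time(host, port, timeout=3.0):
    """Time a TCP handshake to the given host and port.

    A fast handshake combined with a slow application response
    suggests the slowness lives in the application, not the
    network path between user and service.
    """
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return time.monotonic() - start
```

In practice this kind of check is one data point alongside packet captures; packet analysis tools give the same handshake-vs-response split per flow.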


What is artificial narrow intelligence (ANI)?

artificial intelligence under construction
Narrow AI systems are good at performing a single task, or a limited range of tasks. In many cases, they even outperform humans in their specific domains. But as soon as they are presented with a situation that falls outside their problem space, they fail. They also can’t transfer their knowledge from one field to another. For instance, a bot developed by the Google-owned AI research lab DeepMind can play the popular real-time strategy game StarCraft 2 at championship level. But the same AI will not be able to play another RTS game such as Warcraft or Command & Conquer. While narrow AI fails at tasks that require human-level intelligence, it has proven its usefulness and found its way into many applications. Your Google Search queries are answered by narrow AI algorithms. A narrow AI system makes your video recommendations on YouTube and Netflix, and curates your Discover Weekly playlist on Spotify. Alexa and Siri, which have become a staple of many people’s lives, are powered by narrow AI. In fact, in most cases when you hear about a company that “uses AI to solve problem X” or read about AI in the news, it’s about artificial narrow intelligence.


Identity as the New Perimeter


“The question becomes, what happens after the employee connects to your network? Do you have a way to trace the access that that employee is obtaining? Do you have a way to validate if those are legitimate access requests or if something malicious is taking place? What we see today is that many organizations rely only on perimeter security. What Silverfort does is enable you to extend your multi-factor authentication beyond the perimeter to any access, whether it’s on-premises or in the cloud. No matter the application, whether it is a homegrown application or an IoT device.” So, why are so many sensitive systems still not using MFA? Traditional MFA solutions are difficult to deploy. They require software agents or proxies. They often require custom integration with legacy systems. Our work environments and IT infrastructures have evolved. Our world is changing at breakneck speed. New ways of looking at security are needed.


What Is The Hiring Process Of Data Scientists At IBM?

IBM
The technical skills that IBM looks for in data science candidates encompass MLOps, which includes some of the newer skills, like debiasing and machine learning model runtime management. “In addition to that, they need to possess adequate skills in the areas of DataOps, data wrangling and domain knowledge, which is essentially a cross section between industry knowledge and applicability of machine learning in those industries,” says Chahal. Although the company does not overemphasize candidates’ educational background, they need to have a good grasp of the relevant competencies mentioned above. With machine learning certifications abounding on several platforms, Chahal feels they may be a good way for data science aspirants to upskill themselves. “These certifications can verify their awareness about various platforms, tools, libraries and packages that are being used across enterprises today, as well as the familiarity or the ability to work with open source or enterprise/vendor-specific tools.”



Quote for the day:


"Leadership is absolutely about inspiring action, but it is also about guarding against mis-action." -- Simon Sinek