Daily Tech Digest - April 10, 2020

WiFi for Enterprise IoT: Why You Shouldn’t Use It


It’s the job of the local IT team to make sure their enterprise’s IT infrastructure is secure and reliable. Connecting dozens, hundreds, or even thousands of devices to that IT infrastructure poses a high risk to both security and reliability while offering little upside to the IT team. It may be true that your IoT solution will generate immense value for the enterprise to which you’re deploying, but this value often does not accrue directly to the IT team. The local IT team will have other internal requests on their plate, and supporting your IoT deployment will likely be low on their list of priorities. This means that the stakeholders you need most, because of their understanding of and control over the local WiFi setup, are the least incentivized to help you. Let me be clear: I’m not attacking IT teams generally; I’m pointing out the inherent misalignment of incentives even with the most capable and well-meaning IT teams. ... The lack of end-to-end control means that the success or failure of your IoT solution doesn’t rest solely in your hands. Customers don’t care why their shiny new IoT solution isn’t working, or that it’s not your fault; they just care that it isn’t working.



10 Ways to Spot a Security Fraud

The Latin phrase "caveat emptor" has become an English proverb, and for good reason. "Let the buyer beware" is an axiom that nearly all of us are familiar with. Most of us know the phrase in the context of retail purchases. We were taught, or have learned over time, to never take sellers at their word. We must always perform the appropriate research before making a purchase. In security, unfortunately, we must practice a different type of caveat emptor. In recent years, security has become a hot field. And sadly, where there is budget and focus, there are also frauds and deceivers. There is no shortage of people presenting themselves as security experts. Some of them truly are. The rest of them, however, are keen to take advantage of security professionals who haven't yet learned to filter the real security experts from the fakes. ... Honest, hard-working security professionals have no problem emailing or otherwise putting agreements into writing. It's very common for a meeting to result in a follow-on email with minutes and action items.


The CSI Effect Comes to Cybersecurity


The problem is that forensic science is often portrayed as providing definitive, irrefutable proof, when the truth is that, outside of DNA analysis, forensic science should only be used as supplementary weight to support an allegation. In reality, forensic science is used relatively sparingly, especially when eye-witness, circumstantial and alibi evidence is available. It’s comparatively expensive, time-consuming and rarely the definitive evidence that TV suggests. When it comes to cybersecurity investigations, instead of swabs, fingerprints and fibers, a key source of evidence is system logs. Everything from applications to devices is capable of generating an audit trail, ‘logging’ activities and events. At its simplest, if we have a record of logons to a system, and we know when our breach happened, we have a cyber ‘smoking gun’. If we can use log data for a post-attack reconstruction, why can’t log events be used to pre-empt a breach, providing an early warning that suspicious activity is taking place? This is the promise of contemporary SIEM technology: an automated system that captures sufficient evidence not just to understand the timeline of a breach, but to detect the warning signs of an attack before it happens.
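Both halves of that promise reduce to querying the same log data in two directions. Below is a minimal Python sketch of each: a post-incident "smoking gun" search over a known breach window, and a crude early-warning rule that flags bursts of failed logons. The log schema, timestamps and thresholds are illustrative assumptions, not a real SIEM configuration.

```python
# Illustrative sketch only: the event schema, breach window and
# thresholds below are assumptions, not a real SIEM's data model.
from datetime import datetime, timedelta
from collections import defaultdict

BREACH_START = datetime(2020, 4, 9, 2, 0)
BREACH_END = datetime(2020, 4, 9, 3, 0)
FAILURE_THRESHOLD = 5              # failed logons per user...
FAILURE_WINDOW = timedelta(minutes=10)  # ...within this window

# Each record: (timestamp, user, outcome) -- assumed pre-parsed log lines.
events = [
    (datetime(2020, 4, 9, 2, 14), "svc_backup", "failure"),
    (datetime(2020, 4, 9, 2, 15), "svc_backup", "failure"),
    # ... more parsed log records ...
    (datetime(2020, 4, 9, 2, 20), "svc_backup", "success"),
]

# Post-attack reconstruction: who logged on during the breach window?
smoking_gun = [e for e in events
               if BREACH_START <= e[0] <= BREACH_END and e[2] == "success"]

# Early warning: flag users with a burst of failures in a short window.
failures = defaultdict(list)
for ts, user, outcome in events:
    if outcome == "failure":
        failures[user].append(ts)

alerts = []
for user, times in failures.items():
    times.sort()
    for start in times:
        burst = [t for t in times if start <= t <= start + FAILURE_WINDOW]
        if len(burst) >= FAILURE_THRESHOLD:
            alerts.append((user, start))
            break

print("Breach-window logons:", smoking_gun)
print("Early-warning alerts:", alerts)
```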


Security-by-Design Principles Are Vital in Crisis Mode

As organizations move to expand remote working and automation capabilities during the crisis, they are more likely to make mistakes. “You can’t let either the technology or the new business processes outpace the security behind it. You need to ensure that your internal security team is a part of every decision you make regarding new technology, processes or ways of working.” Experts recommend making security a consideration at the earliest possible stage when planning technology deployments. “Make sure you bring in the stakeholders, the business as well as the operators into security discussions,” recommended Bob Martin, co-chair of the Software Trustworthiness Task Group at the Industrial Internet Consortium. “You need to consider [security] as one of the primary aspects of any solution and, like the foundations of a house, everything else is built on top of that,” said Andrew Jamieson, director, security and technology at UL. Organizations that neglect to build a correct foundation risk rebuilding it, or “at least spend a great deal of time and effort fixing something that could have been much more easily remedied earlier on,” Jamieson said.


CD Foundation Serves Up Tekton Pipelines Beta

The beta release of Tekton Pipelines is significant because it signals that the project is now stable enough to be incorporated into DevOps platforms and from here on will follow the same deprecation policies as Kubernetes in terms of supporting previous releases. However, Wilson noted that Tekton Triggers, Tekton Dashboard, Tekton Pipelines CLI and other components are still alpha and as such may evolve from release to release in ways that are not necessarily backward-compatible just yet. In the meantime, the Tekton Pipelines team is encouraging all Tekton projects and users to migrate their integrations to the latest version of the project’s Custom Resource Definitions (CRDs), which serve as its application programming interface (API), and is making a migration guide available. The Tekton Pipelines project is one of several initiatives being advanced under the guidance of the CD Foundation, an arm of The Linux Foundation. Other projects include Jenkins and Jenkins X, a pair of open source CI/CD projects originally developed by CloudBees, and Spinnaker, a CD platform originally created by Netflix.
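For integrators, the migration largely means pointing API calls at the beta group/version. As a rough illustration, here is a minimal sketch using the Kubernetes Python client to list Tekton Pipeline resources under the beta CRD version; it assumes the `kubernetes` client library is installed, a reachable cluster has Tekton deployed, and `v1beta1` is the current version at the time of writing.

```python
# Sketch: querying Tekton Pipeline custom resources through the
# Kubernetes API under the beta CRD version. Cluster access and the
# "default" namespace are assumptions for illustration.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig
api = client.CustomObjectsApi()

# Integrations pinned to the alpha API would have used version="v1alpha1";
# migrating means pointing the same call at "v1beta1".
pipelines = api.list_namespaced_custom_object(
    group="tekton.dev",
    version="v1beta1",
    namespace="default",
    plural="pipelines",
)

for item in pipelines.get("items", []):
    print(item["metadata"]["name"])
```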


ARming a new industry: Manufacturing can fully realise the potential of AR


AR is a frontrunner in helping to minimise machine downtime and streamline the supply chain. For instance, when engineers need to communicate with off-site experts to maintain machinery, on-screen 3D annotations can be used to direct less experienced technicians. This is a crucial aspect of AR, as it can help to address skills gaps. Being able to access the knowledge of an expert technician to support in-house or field technicians decreases the amount of time needed to repair machines and get them back up and running. The technology is also being used as an invaluable training tool, allowing manufacturers to assess and maintain more stringent levels of quality control, as well as develop talented engineers. Furthermore, AR can help in newer practices such as proactive maintenance: using advanced analytics, manufacturers can identify potential errors and use remote experts and AR-annotated displays to guide on-the-ground workers to fix problems before they become a major threat to the manufacturing line.


Zoom, Netflix discuss remote network management challenges


Application performance problems are typically not network problems; they surface as UX issues. As more employees work from home, IT teams may assume UX issues stem from the organization's network rather than from the user's application performance. These issues may also cause network engineers to doubt their skill sets in this unfamiliar territory, Viavi said. However, if a business aims to operate as usual -- even in an unusual time -- then network engineers should likewise approach network issues and remote network management as usual. This means conducting packet analysis and other standard troubleshooting techniques to determine whether an issue stems from the business network or from a user's application or network connection. Netflix's Temkin said his team faced occasional strain in last-mile connections, as did Dzmitry Markovich, senior director of engineering at Dropbox.
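As a concrete illustration of that kind of triage, here is a minimal packet-analysis sketch in Python using scapy. It separates network latency (the TCP handshake RTT) from application latency (the gap between the client's request and the server's first response byte). The pcap file name and the single-session assumption are hypothetical.

```python
# Sketch: split "is it the network or the app?" using one TCP session
# from a hypothetical capture file. Assumes scapy is installed.
from scapy.all import rdpcap
from scapy.layers.inet import IP, TCP

SYN, ACK = 0x02, 0x10
packets = rdpcap("capture.pcap")  # hypothetical single-session trace

client_ip = None
syn_time = synack_time = request_time = first_resp_time = None

for pkt in packets:
    if not (IP in pkt and TCP in pkt):
        continue
    flags = int(pkt[TCP].flags)
    payload_len = len(pkt[TCP].payload)
    if flags & SYN and not flags & ACK and syn_time is None:
        syn_time, client_ip = pkt.time, pkt[IP].src   # client SYN
    elif flags & SYN and flags & ACK and synack_time is None:
        synack_time = pkt.time                        # server SYN/ACK
    elif payload_len > 0 and pkt[IP].src == client_ip and request_time is None:
        request_time = pkt.time                       # first request byte
    elif payload_len > 0 and client_ip and pkt[IP].src != client_ip \
            and request_time is not None:
        first_resp_time = pkt.time                    # first response byte
        break

if syn_time and synack_time:
    # Handshake RTT is (almost) pure network time.
    print(f"Network RTT: {float(synack_time - syn_time) * 1000:.1f} ms")
if request_time and first_resp_time:
    # Request-to-first-byte is dominated by server/application time.
    print(f"App response delay: {float(first_resp_time - request_time) * 1000:.1f} ms")
```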


What is artificial narrow intelligence (ANI)?

Narrow AI systems are good at performing a single task, or a limited range of tasks. In many cases, they even outperform humans in their specific domains. But as soon as they are presented with a situation that falls outside their problem space, they fail. They also can’t transfer their knowledge from one field to another. For instance, a bot developed by the Google-owned AI research lab DeepMind can play the popular real-time strategy game StarCraft 2 at championship level, but the same AI cannot play another RTS game such as Warcraft or Command & Conquer. While narrow AI fails at tasks that require human-level intelligence, it has proven its usefulness and found its way into many applications. Your Google Search queries are answered by narrow AI algorithms. Narrow AI systems make your video recommendations on YouTube and Netflix and curate your Discover Weekly playlist on Spotify. Alexa and Siri, which have become a staple of many people’s lives, are powered by narrow AI. In fact, in most cases where you hear about a company that “uses AI to solve problem X” or read about AI in the news, it’s about artificial narrow intelligence.
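The "fails outside its problem space" point is easy to demonstrate with a toy model. The sketch below is a hypothetical example using scikit-learn: a digit classifier that scores well in-domain yet confidently assigns a digit class to pure noise, because a narrow model has no notion of inputs outside its task.

```python
# Illustrative toy, not how production narrow-AI systems are built.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"In-domain accuracy: {model.score(X_test, y_test):.2f}")  # high

# Out-of-domain input: random noise shaped like a digit image. The model
# has no concept of "this isn't a digit" -- it just picks a class anyway.
noise = np.random.RandomState(0).uniform(0, 16, size=(1, 64))
probs = model.predict_proba(noise)[0]
print(f"Noise classified as {probs.argmax()} with p={probs.max():.2f}")
```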


Identity as the New Perimeter


“The question becomes, what happens after the employee connects to your network? Do you have a way to trace the access that that employee is obtaining? Do you have a way to validate whether those are legitimate access requests or whether something malicious is taking place? What we see today is that many organizations rely only on perimeter security. What Silverfort does is enable you to extend your multi-factor authentication beyond the perimeter to any access, whether it’s on-premise or in the cloud, no matter the application, whether it’s a homegrown application or an IoT device.” So, why do so many sensitive systems still lack MFA? Traditional MFA solutions are difficult to deploy. They require software agents or proxies. They often require custom integration with legacy systems. Our work environments and IT infrastructures have evolved, our world is changing at breakneck speed, and new ways of looking at security are needed.
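The second factor itself is simple; the hard part is placing it in front of every access path. Here is a minimal sketch of a server-side TOTP (RFC 6238) check using the pyotp library; the enrollment and secret handling shown are illustrative assumptions, not Silverfort's product or a production design.

```python
# Sketch of a time-based one-time password (TOTP) check. Secret storage
# here is simplified for illustration; real deployments encrypt it.
import pyotp

# Enrolled once per user and stored server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user's authenticator app derives the same 6-digit code from the
# shared secret and the current 30-second time step.
code_from_user = totp.now()  # stand-in for what the user would type

# Gate the access request on the second factor.
if totp.verify(code_from_user, valid_window=1):  # allow 1 step of clock skew
    print("MFA check passed: proceed with the access request")
else:
    print("MFA check failed: deny or step up")
```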


What Is The Hiring Process Of Data Scientists At IBM?

The technical skills that IBM looks for in data science candidates encompass MLOps, which includes some of the newer skills, like debiasing and machine learning model runtime management. “In addition to that, they need to possess adequate skills in the areas of DataOps, data wrangling and domain knowledge, which is essentially a cross-section between industry knowledge and the applicability of machine learning in those industries,” says Chahal. Although the company does not overemphasize candidates’ educational background, they need to have a good grasp of the relevant competencies mentioned above. With machine learning certifications abounding across platforms, Chahal feels that these may be a good way for data science aspirants to upskill themselves. “These certifications can verify their awareness of various platforms, tools, libraries and packages that are being used across enterprises today, as well as the familiarity or the ability to work with open source or enterprise/vendor-specific tools.”



Quote for the day:


"Leadership is absolutely about inspiring action, but it is also about guarding against mis-action." -- Simon Sinek

