Daily Tech Digest - March 31, 2021

What is cyber risk quantification, and why is it important?

Put simply, the idea behind quantification is to prioritize risks according to their potential for financial loss, thus allowing responsible people in a company to create budgets based on mitigation strategies that afford the best protection and return on investment. Now to the difficult part: how to incorporate cyber risk quantification. "Risk quantification starts with the evaluation of your organization's cybersecurity risk landscape," explained Tattersall. "As risks are identified, they are annotated with a potential loss amount and frequency, which feed a statistical model that considers the probability of occurrence and the financial impact." Tattersall continued, "When assessing cybersecurity projects, risk quantification supports the use of loss avoidance as a proxy for return on investment. Investments in tighter controls, assessment practices and risk management tools are ranked by potential exposure." According to Tattersall, companies are already employing cyber risk quantification. He offered the FAIR Institute's Factor Analysis of Information Risk as an example. The FAIR Institute's website says the framework provides a model for understanding, analyzing and quantifying cyber risk and operational risk in financial terms.
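As a rough illustration of the kind of statistical model Tattersall describes, the sketch below runs a tiny Monte Carlo simulation that turns an estimated event frequency and loss range into an expected annual loss per risk, so risks can be ranked by exposure. It is not FAIR's actual methodology, and every figure in the risk register is a made-up placeholder.

```python
import math
import random

# Hypothetical risk register: expected event frequency per year and per-event loss range (USD).
# All figures are illustrative placeholders, not real benchmarks.
RISKS = {
    "ransomware":       {"freq": 0.3, "loss_low": 200_000, "loss_high": 2_000_000},
    "credential_theft": {"freq": 1.5, "loss_low": 20_000,  "loss_high": 250_000},
    "vendor_breach":    {"freq": 0.1, "loss_low": 500_000, "loss_high": 5_000_000},
}

def sample_poisson(lam: float) -> int:
    """Minimal Poisson sampler (Knuth's method) so the sketch has no dependencies."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def expected_annual_loss(risk: dict, trials: int = 20_000) -> float:
    """Monte Carlo estimate: simulate many years, average the total loss per year."""
    total = 0.0
    for _ in range(trials):
        for _ in range(sample_poisson(risk["freq"])):
            total += random.uniform(risk["loss_low"], risk["loss_high"])
    return total / trials

if __name__ == "__main__":
    # Rank risks by exposure so mitigation budget can follow potential loss.
    losses = {name: expected_annual_loss(r) for name, r in RISKS.items()}
    for name, loss in sorted(losses.items(), key=lambda kv: -kv[1]):
        print(f"{name:<18} ~${loss:,.0f} expected annual loss")
```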


What We Know (and Don't Know) So Far About the 'Supernova' SolarWinds Attack

It's not unusual for multiple nation-state attacker groups to target the same victim organization, nor even for them to reside in its network concurrently, unbeknownst to one another, while conducting their intelligence-gathering operations. But Supernova and the Orion supply chain attack demonstrate how nation-states also can have similar ideas yet different methods regarding how they target and ultimately burrow into the networks of their victims. Supernova homed in on SolarWinds' Orion by exploiting a flaw in the software running on a victim's server; Sunburst did so by inserting malicious code into builds for versions of the Orion network management platform. The digitally signed builds then were automatically sent to some 18,000 federal agencies and businesses last year via a routine software update process, but the attackers ultimately targeted far fewer victims than those who received the malicious software update: fewer than 10 federal agencies were affected, as well as some 40 of Microsoft's own customers. US intelligence agencies have attributed that attack to a Russian nation-state group, and many details of the attack remain unknown.


World Backup Day 2021: what businesses need to know post-pandemic

For many businesses, the shift to remote working that occurred worldwide last year due to the Covid-19 outbreak brought with it an ‘always on’, omnichannel approach to customer service. As this approach looks set to continue in order to meet the needs of consumers, organisations must consider how they can protect their data continuously, with every change, update or new piece of data protected and available in real time. “Continuous data protection (CDP) is enabling this change, saving data in intervals of seconds – rather than days or months – and giving IT teams the granularity to quickly rewind operations to just seconds before disruption occurred,” said Levonai. “Completely flexible, CDP enables an IT team to quickly recover anything, from a single file or virtual machine right up to an entire site. “As more organisations join the CDP backup revolution, data loss may one day become as harmless as an April Fool’s joke. Until then, it remains a real and present danger.”... Businesses should back up their data by starting in reverse. Effective backup really starts with the recovery requirements and aligning to the business needs for continued service.
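The core CDP idea Levonai describes (capture every change as it happens, then rewind to just before the disruption) can be sketched in a few lines. This is a toy illustration, not any vendor's implementation; the journal structure and example keys are hypothetical.

```python
import time
from dataclasses import dataclass, field

@dataclass
class JournalEntry:
    timestamp: float
    key: str
    value: bytes  # the new content of the block/file after the change

@dataclass
class ContinuousJournal:
    """Toy continuous-data-protection journal: every write is captured with a timestamp,
    so state can be rewound to any point in time (seconds of granularity, not daily backups)."""
    entries: list = field(default_factory=list)

    def record(self, key: str, value: bytes) -> None:
        self.entries.append(JournalEntry(time.time(), key, value))

    def rewind(self, point_in_time: float) -> dict:
        """Rebuild the newest state of every key as of `point_in_time`."""
        state = {}
        for entry in self.entries:
            if entry.timestamp <= point_in_time:
                state[entry.key] = entry.value   # later entries overwrite earlier ones
        return state

# Usage: capture writes continuously, then recover to just before a (simulated) corruption.
journal = ContinuousJournal()
journal.record("invoice.db", b"v1")
safe_point = time.time()
journal.record("invoice.db", b"CORRUPTED")   # e.g. ransomware or a bad deploy
print(journal.rewind(safe_point))            # {'invoice.db': b'v1'}
```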


DevOps is Not Enough for Scaling and Evolving Tech-Driven Organizations

DevOps has been an evolution of breaking silos between Development and Operations to enable technical teams to be more effective in their work. However, in most organizations we still have other silos, namely: Business (Product) and IT (Tech). "BizDevOps" can be seen as an evolution from DevOps, where the two classical big silos in organizations are broken into having teams with the product and tech disciplines needed to build a product. This evolution is happening in many organizations; most of the time these are called "Product Teams". Is it enough to maximize impact as an organization? I don't think so, and that is the focus of my DevOps Lisbon Meetup talk and the ideas around sociotechnical architecture and systems thinking I have been exploring. In a nutshell: we need empowered product teams, but teams must be properly aligned with value streams, which in turn must be aligned to maximize the value exchange with the customer. To accomplish this, we need to have a more holistic view and co-design of the organization structures and technical architecture.


This CEO believes it’s time to embrace ideological diversity and AI can help

It’s important to remember that each decision from a recruiter or hiring manager contributes to a vast dataset. AI utilizes these actions and learns the context of companies’ hiring practices. This makes it susceptible to bias when used improperly, so it is critical to deploy AI models that are designed to minimize any adverse impact. Organizations can make sure humans are in the loop and providing feedback, steering AI to learn based on skill preferences and hiring requirements. With the ongoing curation of objective data, AI can help companies achieve recruiting efficiency while still driving talent diversity. One way hiring managers can distance themselves from political bias is by relying on AI to “score” candidates based on factors such as proficiency and experience, rather than data like where they live or where they attended college. In the future, AI might also be able to mask details such as name and gender to further reduce the risk of bias. With AI, team leaders receive an objective second opinion on hiring decisions by either confirming their favored candidate or compelling them to question whether their choice is the right one.
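A minimal sketch of that kind of skills-only scoring is shown below. It is not any vendor's model: the feature names, weights and excluded fields are illustrative assumptions, chosen only to show how biasing attributes can be kept out of the score entirely.

```python
# Minimal sketch of "scoring on proficiency and experience" while excluding fields
# that can act as proxies for bias. Feature names and weights are hypothetical.
SKILL_WEIGHTS = {"years_experience": 0.4, "skills_match": 0.5, "certifications": 0.1}
EXCLUDED_FIELDS = {"name", "gender", "home_city", "university"}  # never enter the score

def score_candidate(profile: dict) -> float:
    masked = {k: v for k, v in profile.items() if k not in EXCLUDED_FIELDS}
    return sum(SKILL_WEIGHTS[k] * masked.get(k, 0.0) for k in SKILL_WEIGHTS)

candidate = {
    "name": "…", "home_city": "…", "university": "…",
    "years_experience": 0.7, "skills_match": 0.9, "certifications": 0.5,
}
print(round(score_candidate(candidate), 2))  # 0.78, an objective "second opinion"
```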


Why AI can’t solve unknown problems

Throughout the history of artificial intelligence, scientists have regularly invented new ways to leverage advances in computers to solve problems in ingenious ways. The earlier decades of AI focused on symbolic systems. This branch of AI assumes human thinking is based on the manipulation of symbols, and any system that can compute symbols is intelligent. Symbolic AI requires human developers to meticulously specify the rules, facts, and structures that define the behavior of a computer program. Symbolic systems can perform remarkable feats, such as memorizing information, computing complex mathematical formulas at ultra-fast speeds, and emulating expert decision-making. Popular programming languages and most applications we use every day have their roots in the work that has been done on symbolic AI. But symbolic AI can only solve problems for which we can provide well-formed, step-by-step solutions. The problem is that most tasks humans and animals perform can’t be represented in clear-cut rules.
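A toy example of the symbolic approach makes the point concrete: the developer must write every rule and fact by hand, and the system can only reach conclusions those rules anticipate. The facts and rules below are invented for illustration.

```python
# Toy symbolic (rule-based) system: every rule must be written out explicitly,
# which is exactly why such systems break on tasks without clear-cut rules.
FACTS = {"has_fever", "has_cough"}

RULES = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu"},            "recommend_rest"),
]

def forward_chain(facts: set, rules: list) -> set:
    """Apply rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(FACTS, RULES))
# {'has_fever', 'has_cough', 'suspect_flu', 'recommend_rest'}
```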


The ‘why’ of digital transformation is the key to unlocking value

Ill-prepared digital transformation projects have ripple effects. One digitalization effort that fails to produce value doesn’t just exist in a vacuum. If a technical upgrade, cloud migration, or ERP merge results in a system that looks the same as before, with processes that aren’t delivering anything new, then the decision makers will see that lack of ROI and lose interest in any further digitalization because they believe the value just isn’t there. Imagine an IT team leader saying they want fancy new dashboards and new digital boardroom features. But a digital transformation project that ends with just implementing new dashboards doesn’t change the underlying facts about what kind of data may be read on those dashboards. And if your fancy dashboards start displaying incorrect data or gaps in data sets, you haven’t just undermined the efficacy and “cool factor” of those dashboards; you’ve also made it that much harder to salvage the credibility of the project and advocate for any new digitalization in the future. What’s the value in new dashboards if you haven’t fixed the data problems underneath?


New Security Signals study shows firmware attacks on the rise

Microsoft has created a new class of devices specifically designed to eliminate threats aimed at firmware, called Secured-core PCs. This was recently extended to servers and IoT devices, as announced at this year’s Microsoft Ignite conference. With Zero Trust built in from the ground up, security decision-makers (SDMs) will be able to invest more of their resources in strategies and technologies that will prevent attacks in the future rather than constantly defending against the onslaught of attacks aimed at them today. The SDMs in the study who reported they have invested in Secured-core PCs showed a higher level of satisfaction with their security and enhanced confidentiality, availability, and integrity of data compared with those not using them. Based on analysis of Microsoft threat intelligence data, Secured-core PCs provide more than twice the protection from infection of non-Secured-core PCs. Sixty percent of surveyed organizations who invested in Secured-core PCs reported supply chain visibility and monitoring as a top concern.


7 Traits of Incredibly Efficient Data Scientists

Believe it or not, not every data analysis requires machine learning and artificial intelligence. The most efficient way to solve a problem is to use the simplest tool possible. Sometimes, a simple Excel spreadsheet can yield the same result as a big fancy algorithm using deep learning. By choosing the right algorithms and tools from the start, a data science project becomes much more efficient. While it’s cool to impress everyone with a super complex tool, it doesn’t make sense in the long run when less time could be spent on a simpler, more efficient solution. ... Doing the job right the first time is the most efficient way to complete any project. When it comes to data science, that means writing code using a strict structure that makes it easy to go back and review, debug, change, and even make your code production-ready. Clear syntax guidelines make it possible for everyone to understand everyone else’s code. However, syntax guidelines aren’t just there so you can understand someone else’s chicken scratch — they’re also there so you can focus on writing the cleanest, most efficient code possible.
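As a small, hypothetical illustration of the "simplest tool possible" point, a per-group average is often a perfectly good first answer, and at minimum a baseline any fancier model should have to beat. The data below is made up.

```python
# "Use the simplest tool possible": a per-group average is often a good first answer,
# and a baseline to beat before reaching for deep learning. Data below is made up.
from collections import defaultdict

sales = [
    {"region": "north", "revenue": 120}, {"region": "north", "revenue": 100},
    {"region": "south", "revenue": 300}, {"region": "south", "revenue": 280},
]

def baseline_prediction(rows, group_key, target):
    """Predict the target for each group as its historical mean."""
    totals, counts = defaultdict(float), defaultdict(int)
    for row in rows:
        totals[row[group_key]] += row[target]
        counts[row[group_key]] += 1
    return {g: totals[g] / counts[g] for g in totals}

print(baseline_prediction(sales, "region", "revenue"))
# {'north': 110.0, 'south': 290.0}; compare any fancier model against this first
```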


How insurers can act on the opportunity of digital ecosystems

First, insurers must embrace the shift to service-dominant strategies and gradually establish a culture of openness and collaboration, which will be necessary for the dynamic empowerment of all players involved. Second, insurers must bring to the platform the existing organizational capabilities required for customer-centric value propositions. This means establishing experts in the respective ecosystems—for example, in mobility, health, home, finance, or well-being—and building the technological foundations necessary to integrate partners, in terms of service catalogs and APIs, as well as to create seamless customer journeys. Finally, insurers must engage customers and other external actors by integrating resources and engaging in service exchange for mutual value generation. My wife, for example, has just signed up for a telematics policy with an insurance company that offers not only incentives for driving behavior but also value-added services, including car sales and services. She now regularly checks whether her driving style reaches the maximum level possible.



Quote for the day:

"When we lead from the heart, we don't need to work on being authentic we just are!" -- Gordon Tredgold

Daily Tech Digest - March 30, 2021

Start-ups, established enterprises, and tech: what is the cost of change?

There is no tech stack that will give you a leg-up because it’s new and different from what everybody else is using. The only thing that will give you a leg-up is something that everybody already knows how to use. But what about “this is the best tool for the job”? That way of thinking can be a myopic view of both the words ‘best’ and ‘job.’ The job is keeping the organisation in business. The best tool will occupy the ‘least worst’ position for as many problems as possible. Pragmatism wins the day. Build things that are simple, build things that are boring, and build things that are battle-tested. Isolate things that are specifically tied to one area of your business and make sure all of that is together. When you must make decisions about encapsulating or abstracting it, it’s all contained. Then you can define boundaries. Make sure you define those boundaries within a simple code base. Think about this in terms of cheap vs expensive: it’s cheap to stick to those boundaries. Understand your boundaries, be clean with them and adjust them as you’re evolving. And don’t ever stop! The cost of reshaping a function name, or its position in a code base, is extremely low relative to the cost of moving things between services.
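One way to read the advice about isolating business areas and keeping boundaries inside a simple code base is sketched below: a module exposes one narrow entry point and keeps its internals private, so reshaping those internals later stays cheap. The module, function and constant names are hypothetical.

```python
# billing.py: one business area kept behind a narrow, explicit boundary.
# Callers use only the public function; the pricing rules stay encapsulated,
# so renaming or reshaping the internals later is cheap.
__all__ = ["invoice_total"]

_TAX_RATE = 0.20          # internal detail, free to change

def _apply_tax(amount: float) -> float:
    return amount * (1 + _TAX_RATE)

def invoice_total(line_items: list[float]) -> float:
    """The only entry point other modules should import."""
    return round(_apply_tax(sum(line_items)), 2)

# elsewhere in the code base:
# from billing import invoice_total
# print(invoice_total([10.0, 5.5]))   # 18.6
```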


How to avoid 4 common zero trust traps (including one that could cost you your job)

The trap most practitioners fall into is the need to understand and define every identity in their organizations. Initially, this seems simple but then you realize there are service accounts and machine and application identities. It’s even more difficult because that identity project has to include permissions and each application has its own schema for what permissions are granted. There’s no standardization. Instead, focus on the user accounts. When we start with the application ecosystems, our intent is to focus on the user and application boundary. Now if we look at identities, start with interactive logins, i.e., users who need to access an account to perform an action. Ensure non-repudiation by getting rid of generic logins, using certificates and rotating credentials. ... Most boardrooms see zero trust as a way of using any device to be able to conduct business. That should be the end result of a robust zero trust program. If it is where you start, you will be overwhelmed with breaches. The purpose of zero trust is to technically express the fact that you don’t trust any device or network. You don’t accomplish that by closing your eyes to it.
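A hedged sketch of the "start with interactive logins" advice might look like the audit below: flag generic or shared accounts and credentials that have not been rotated within policy. The account inventory and the 90-day threshold are illustrative assumptions; in practice this data would come from a directory or identity provider.

```python
from datetime import datetime, timedelta

# Hypothetical account inventory; in practice this would come from the IdP or directory.
ACCOUNTS = [
    {"name": "admin",     "type": "generic",     "last_rotated": "2020-01-15"},
    {"name": "j.doe",     "type": "interactive", "last_rotated": "2021-03-01"},
    {"name": "build-svc", "type": "service",     "last_rotated": "2019-11-30"},
]

MAX_CREDENTIAL_AGE = timedelta(days=90)   # illustrative rotation policy

def audit(accounts, today=None):
    today = today or datetime.utcnow()
    findings = []
    for acct in accounts:
        if acct["type"] == "generic":
            findings.append(f"{acct['name']}: generic login, replace with named accounts")
        age = today - datetime.strptime(acct["last_rotated"], "%Y-%m-%d")
        if age > MAX_CREDENTIAL_AGE:
            findings.append(f"{acct['name']}: credential not rotated for {age.days} days")
    return findings

for finding in audit(ACCOUNTS):
    print(finding)
```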


In Secure Silicon We Trust

With RoT technology, "It's possible to gain a high degree of assurance that what's expected to be running is actually running," MacDonald explains. The technology achieves this level of protection using an encrypted instruction set that is etched into the chip at the time it is manufactured. When the system boots, the chip checks this immutable signature to validate the BIOS. If everything checks out, the computer loads the software stack. If there's a problem, it simply won't boot. Secure silicon doesn't directly protect against all types of threats, but it does ensure that a system is secure at the foundational level. This is critical because attackers who gain access to the BIOS or firmware can potentially bypass the operating system and tamper with encryption and antivirus software, notes Rick Martinez, senior distinguished engineer in the Client Solutions Group Office of the CTO at Dell Technologies. "It provides a reliable trust anchor for supply chain security for the platform or device," Martinez notes. Intel has introduced its SGX technology, which bypasses a system's OS and virtual machine (VM) layers while altering the way the system accesses memory. SGX also supports verification of the application and the hardware it is running on.
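The boot-time check described above can be sketched as a chain in which each stage is verified against values anchored in hardware before it is allowed to run. This is illustrative pseudo-firmware, not any vendor's implementation, and a hash comparison stands in for real signature verification.

```python
import hashlib

# Values anchored "in silicon": in a real root of trust these would be burned-in
# public keys or measurements, not a Python dict. All values here are placeholders.
TRUSTED_MEASUREMENTS = {
    "bios":       hashlib.sha256(b"vendor-bios-build-1.2").hexdigest(),
    "bootloader": hashlib.sha256(b"bootloader-build-7").hexdigest(),
    "os_kernel":  hashlib.sha256(b"kernel-5.x-signed").hexdigest(),
}

BOOT_CHAIN = ["bios", "bootloader", "os_kernel"]

def measure(stage: str, image: bytes) -> bool:
    """Stand-in for signature verification: compare the image hash to the anchored value."""
    return hashlib.sha256(image).hexdigest() == TRUSTED_MEASUREMENTS[stage]

def boot(images: dict) -> None:
    for stage in BOOT_CHAIN:
        if not measure(stage, images[stage]):
            raise SystemExit(f"halt: {stage} failed verification; refusing to boot")
        print(f"{stage} verified, handing off to next stage")
    print("software stack loaded")

# A tampered bootloader never gets control:
boot({
    "bios":       b"vendor-bios-build-1.2",
    "bootloader": b"tampered-bootloader",
    "os_kernel":  b"kernel-5.x-signed",
})
```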

Finding remote work a struggle? Here's how to get your team back on track

"If you want to support people who are remote working, you cannot be an old-fashioned leader. That sounds critical, but you can't be the kind of leader that is saying, 'I don't really like people who are remote working and I want to know that they're doing stuff', and then always checking that the green light's on," she says. Evdience from the Harvard Business Review suggests Dawson is onto something. HBR says business leaders must understand that being nice to each other and goofing around together is part of the work we do. The informal interactions at risk in hybrid and remote work are not distractions; instead; they foster the employee connections that feed productivity and innovation. Dawson says successful business leaders in the future will have to be more empathetic. They will have to be unafraid of asking people how they're getting on. That question will need to be posed in the right way: rather than checking up on staff to see if they're at their desks, leaders should have conversations with staff about their feelings and objectives.


NaaS: Network-as-a-service is the future, but it’s got challenges

Full adoption of NaaS is still in its early days because most enterprise network functions require physical hardware to transport data to and from endpoints and the data center or internet. That is a challenge to deliver as a service. The Layer 4-7 functions are already available in a cloud-delivery model. Over the next five-plus years, IT teams will increasingly adopt NaaS as suppliers deliver hybrid offerings that include software, cloud intelligence, and the option for management of on-premises hardware. These services will be subscription-based and pay as you go, making networking more of an operational cost than a capital cost. They will provide centralized management with the ability to easily add and remove network and security functionality. The services will enable outsourcing of enterprise network operations to providers that may include vendors and their partners who provide service level agreements (SLA) to define uptime and problem-resolution guarantees. Right now, NaaS is best suited to organizations with a lean-IT philosophy and a need to provide networking support for at-home and branch locations.


Industrial AI prepares for the mainstream – How asset-intensive businesses can get themselves ready

A future-proof industrial AI infrastructure necessitates laying the groundwork for industrial AI readiness, requiring collaboration across industrial environments. In fact, the software, hardware, architecture, and personnel elements will form the building blocks of the industrial AI infrastructure. And that infrastructure is what empowers organisations to take their industrial AI proofs-of-concept and mature them into tangible solutions that drive ROI. An industrial AI infrastructure needs to accelerate time to market, build operational flexibility and scalability into AI investments and harmonise the AI model lifecycle across all applications. Roles, skills, and training are critical. Executing industrial AI relies on having the right people in charge. That means making a deliberate effort to cultivate the skills and approaches needed to create and deploy AI-powered initiatives organisation-wide. Finally, ethical and responsible AI use is predicated on transparency, and transparency involves keeping everyone in the loop: creating clear channels of communication, reliable process documentation and alignment across all stakeholders.


Operating in an increasingly digitalized world

Consumers have become less cost-conscious and more focused on sustainability, he said. Those are "top of mind issues. [Consumers] will pick slower shipping if they see it's good for the environment. They want to support their local communities so they're shopping more locally." Buyers are also looking for unique products and "no longer the same old, same old." Merchants have started creating 3D models of their products, Jaffer said. Digital transformation will help with environmental sustainability and climate change, Lapiello said. Organizations will have to fully embrace privacy, cybersecurity and artificial intelligence, he said. "By 2030, quantum computing will be available in some shape or form and will be an incredibly disruptive technology," Lapiello said. "I truly believe the current machine learning generating predictions based on correlations will become obsolete and will be replaced by causal AI, which is quite ripe and will allow for better decisions." One of the biggest changes will be that people will have moved away from using mobile phones to glasses, Hackl said. "It's not a question of will it happen, but when ... We're 3D beings in a 3D world and the content you'll consume through these glasses will have dimensions" that change what we see in our surroundings.


SD-WAN surges over past two years as MPLS plummets

“SD-WAN has dramatically increased in adoption in the past couple of years. The pandemic slowed roll-outs for a time, but increased interest in adoption. SD-WAN frees WAN managers to select a broad mix of underlay technologies, and can also boost performance.” The report aimed to offer a clear picture of how mid-size to large enterprises are adjusting to emerging WAN technologies, helping suppliers make more informed decisions. It provided an in-depth analysis based on the experiences of WAN managers from 125 companies; those represented in the survey had a median revenue of $10bn and included IT managers covering the design, sourcing and management of US national, regional and global corporate wide-area networks. The standout finding of the study was that 43% of enterprises surveyed had installed SD-WAN in 2020, compared with just 18% in 2018. Driving this growth – and key motivators for WAN managers pursuing SD-WAN, according to the survey – were increasing site capacity and using alternative access solutions. Two-fifths of respondents preferred a co-managed SD-WAN setup and, on top of this, enterprises were running MPLS at an average of 71% of sites during the three-year period of 2018-2020.


Applying CIAM Principles to Employee Authentication

To enhance employee authentication for system access, some organizations, including Navy Federal Credit Union and the travel portal Priceline, are adopting customer identity and access management, or CIAM, procedures for their workforces. Those include dynamic authorization, continuous authentication and the use of various forms of biometrics. "With the death of user ID and password, I am trying to create digital layers of authentication on the workforce side," Malta says. "We are looking to be able to let the hybrid workforce ‘inside our network’ in a very frictionless way." Joe Dropkin, principal server engineer at Priceline, says he's been applying the concept of CIAM to employee authentication because of the shift toward applications and data storage in the cloud. “We did not want our employees to go through multiple layers of authentication to SAAS applications. The users now have single 'pane of glass' to look at,” he says. Priceline employees no longer have to log in multiple times to access different applications. Once they're authenticated, using multiple layers, they gain access to all appropriate systems, Dropkin says.
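A minimal sketch of that single-login experience is shown below, assuming a hypothetical token store and entitlement list (this is not Priceline's or anyone else's actual setup): one multi-layer authentication issues a short-lived token that then opens every application the user is entitled to.

```python
import secrets
import time

# Hypothetical entitlements; in practice these come from the identity provider.
ENTITLEMENTS = {"j.doe": {"expenses-app", "crm", "wiki"}}
SESSIONS = {}  # token -> (user, expiry)

def authenticate(user: str, password_ok: bool, mfa_ok: bool) -> str:
    """One multi-layer login issues a single short-lived session token."""
    if not (password_ok and mfa_ok):
        raise PermissionError("authentication failed")
    token = secrets.token_urlsafe(16)
    SESSIONS[token] = (user, time.time() + 8 * 3600)   # 8-hour session, illustrative
    return token

def access(token: str, app: str) -> bool:
    user, expiry = SESSIONS.get(token, (None, 0))
    return time.time() < expiry and app in ENTITLEMENTS.get(user, set())

tok = authenticate("j.doe", password_ok=True, mfa_ok=True)
print(access(tok, "crm"), access(tok, "payroll"))   # True False: one login, many apps
```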


Beyond MITRE ATT&CK: The Case for a New Cyber Kill Chain

MITRE ATT&CK, by contrast, is a more modern approach focused on TTPs. It seeks to classify attackers' goals, tasks, and steps; as such, it is a much more comprehensive approach to modeling an attack. That said, MITRE ATT&CK also has its shortcomings, notably when a security team is using an XDR platform. In an automated detection scenario, defenders might see the symptoms without knowing the exact root cause, such as suspicious user behavior, and such scenarios are harder to fit into MITRE ATT&CK. Stellar Cyber, a developer of XDR technology, argues for the creation of a new framework. It envisions an XDR framework/kill chain leveraging MITRE ATT&CK on the known root causes and attackers' goals but going further regarding other data sources, such as anomalous user behavior. There is precedent for an individual vendor feeling a need to extend or amend frameworks. FireEye came up with its own version of the kill chain, which put more emphasis on attackers' ability to persist threats, while endpoint detection and response (EDR) heavyweight CrowdStrike uses MITRE ATT&CK extensively but provides a set of nonstandard categories to cover a broader range of scenarios.



Quote for the day:

"Don't be buffaloed by experts and elites. Experts often possess more data than judgement." -- Colin Powell

Daily Tech Digest - March 29, 2021

Artificial Intelligence: Blazing A Trail For The Future

With AI gaining momentum across multiple industries, its efficient use largely depends on businesses' necessity and opportunity to introduce innovation. When trying to understand whether it's a fit-for-purpose improvement, two criteria matter ― the nature of the tasks and the cost of an error. First, automation is not always the best match for tasks that demand compromising, setting priorities or emotion-based decision-making. However, if your company collects, stores and processes big data, AI may become your first choice; handling huge data volumes at a high pace can streamline a business model. Second, human intelligence is the only way out (at least today) when it comes to making strategic choices like planning further guides for business development, as even a single mistake may lead to decreasing revenues or brand image deterioration. ... Data used by a neural network is divided into three sets. A training set ― the set of input images of, for instance, printed circuit boards in which your neural network should identify defects, thus learning. A development set ― a new set for tuning your network depending on how well it performs on this set.
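A short sketch of that split, with the held-out test set that usually forms the third part, might look like the following; the proportions and file names are illustrative.

```python
import random

def split_dataset(items, train_frac=0.8, dev_frac=0.1, seed=42):
    """Shuffle once, then carve out training, development (tuning) and test sets."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_dev = int(len(shuffled) * dev_frac)
    return (shuffled[:n_train],                     # used to fit the network
            shuffled[n_train:n_train + n_dev],      # used to tune it
            shuffled[n_train + n_dev:])             # held out for the final check

images = [f"board_{i:03d}.png" for i in range(1000)]   # e.g. printed-circuit-board photos
train, dev, test = split_dataset(images)
print(len(train), len(dev), len(test))   # 800 100 100
```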


Why certificate automation is no longer just “nice to have”

Manually provisioning or registering a certificate at the right time for the right purpose is an incredibly time-intensive task. Merely deploying an SSL certificate on just one server could take up to 2 hours! And that’s just the beginning – subtasks such as documenting each certificate’s location and purpose, configuring certificates according to a myriad of endpoint devices and varying operating systems, and then confirming that each performs correctly add to the required time and effort. Today’s enterprises need to be quick-moving and agile to keep up with constant flux and rapid change. Beyond time saved, automated deployment means reduced human error and increased reliability and consistency. ... Certificates include the requirements and policies that enterprises use to define trust within their organization, extending the security of using only highly trusted key architectures. To ensure a certificate is always in its best possible state, organizations need to be able to quickly and efficiently revoke and replace certificates on demand and without a hefty time-intensive process. Spending 2+ hours per certificate is unreasonable: it needs to happen seamlessly and at scale.
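One small piece of that automation, scanning a directory of certificates and flagging any that expire soon so renewal can be triggered without manual tracking, might look like the hedged sketch below. It uses the widely available Python cryptography package; the directory path and 30-day window are assumptions.

```python
from datetime import datetime, timedelta
from pathlib import Path

from cryptography import x509   # pip install cryptography

RENEWAL_WINDOW = timedelta(days=30)   # illustrative policy

def certs_needing_renewal(cert_dir: str):
    """Yield (path, expiry) for every PEM certificate expiring within the window."""
    for pem_path in Path(cert_dir).glob("*.pem"):
        cert = x509.load_pem_x509_certificate(pem_path.read_bytes())
        expiry = cert.not_valid_after
        if expiry - datetime.utcnow() < RENEWAL_WINDOW:
            yield pem_path, expiry

if __name__ == "__main__":
    for path, expiry in certs_needing_renewal("/etc/pki/issued"):   # placeholder path
        print(f"{path.name} expires {expiry:%Y-%m-%d}: trigger automated renewal")
```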


How do I select a bot protection solution for my business?

First, it’s important to understand what bot operators are attempting to accomplish. Are they trying to deplete inventory from your site? Scrape prices to better compete? Test stolen credentials to commit fraud? By understanding the full impact bots have on your business, you can make sure the solution puts an end to your specific problems. For example, many solutions are architected to require multiple requests before detecting a bot – if so, they are not designed to effectively stop scraping and account takeover attempts that quickly move ‘in and out.’ History has shown that attackers will adapt to your defenses. A successful bot mitigation solution has to be effective immediately, stopping new bots and never-before-seen attack methods. It must also stand the test of time by stopping bots months and years later. You should ask what steps are being taken towards long-term efficacy, such as deterring reverse engineering and R&D to detect new automated threats. You should seek as little configuration, maintenance and support as possible. Does the solution make your life easier or not?


7 Hiring Mistakes Companies Make While Recruiting Data Scientists

Most companies like to call a range of job roles data scientists when in reality, the role could be machine learning engineer, big data developer, business intelligence analyst, data engineer, and so on. Recruiting for a data scientist's role but assigning tasks that do not sync with expectations is a big turn-off for candidates, which may lead to them quitting the job. Companies often put them in roles where, for example, only a data analyst is required. It quickly demotivates the data scientist and erodes their skill sets. Companies should be transparent about roles and responsibilities and the kind of project engagements the candidates will have. Setting the right expectation is crucial. Data science is quite popular, and many companies tend to create data science roles without knowing how data science can help the organisation. In such companies, the data scientists will have no clue as to what they are expected to do because the companies have not defined the roles in the first place. The confusion can lead to demotivation and, finally, result in the quitting or firing of the data scientist.


5G: Time to get real about what it will be used for

Private 5G? Has your company set aside funding for spectrum auctions? You’re bidding against big telcos who have spent billions, but think big. Anyway, you could use public spectrum or shared spectrum, right? Of course, you’d still need to build your own network, including towers and radios, wherever you expect to use that spectrum. Maybe the billions for spectrum wasn’t the big financial issue after all. Once you’ve deployed, everything will be great as long as nobody else on the spectrum plays dirty. Sell that to the CFO. There are companies that could justify private 5G, but it’s not a mainstream opportunity. Then there’s IoT. In pure marketing terms, it makes sense to turn devices into cellular customers when you start depleting the market among humans, but do we really think people or companies are going to pay for sensor connection via 5G on a large scale, when we already connect sensors in other ways (like Wi-Fi) for free? How many “things” have we managed to “internet” without 5G, and why wouldn’t those old ways continue to work, without adding 5G “thing-plans” to “family plans?”


Cyber Security Playing Greater Role In Energy Companies’ Digital Transformation

Other industry experts agree on the need for energy companies to improve their cyber-security defenses as they begin to rely more heavily on digital technologies such as AI and cloud computing in their operations. “Industrial cyber has become the new risk frontier and in particular, the energy vertical is the most attacked infrastructure vertical,” said Leo Simonovich, global head of Industrial Cyber and Digital Security at Siemens. “The number of attacks is increasing and the sophistication is increasing.” He said an attack against a piece of critical infrastructure such as a power plant could lead to a temporary loss of power, total shutdown of operations or worse, a public safety incident. Siemens, he said, works with its customers to shore up their cyber defenses, “using next-generation built-for-purpose technologies powered by AI to stay ahead of attackers.” Many energy companies are beginning to adopt AI — which mimics human intelligence by analyzing data in order to make decisions — to stay ahead of the cybercriminals and foreign government-backed hackers.


Quantum brain: The hidden answers to the open questions in AI

At present, Khajetoorians and the team have discovered how to connect these cobalt atoms into tailored networks which, it is believed, can mimic the behavior of a brain-like model. These efforts are centered on attempts to take artificial intelligence to a higher level. To that end, the researchers constructed a network of cobalt atoms on black phosphorus. Using this method, they claim, one could design a material that stores and processes information like a human brain, one that adapts itself based on the input. The researchers plan to scale up the entire system into a larger network of atoms and to explore potential applications of this quantum material. Eventually, AI engineers could construct a real machine from the quantum material, building a self-learning computing device that is more energy efficient and smaller than today's technology. But that requires understanding how the quantum brain works, and this is still a mystery. However, the destination seems nearby.


Women IT leaders reset the CIO-CISO relationship

The relationship between CIO and CISO can be fraught with friction, but Angelic Gibson and Christina Quaine, CIO and CISO of B2B payment service AvidXchange, represent a new emerging dynamic between IT and security, one that relies on open communication, a commitment to diversity, and a strong bond between IT leadership. It’s little secret that security has become a board-level priority for organizations across all industries, with organizations taking risk-averse approaches to IT to stay ahead of security, compliance, and risk management issues. As security and its relationship to IT grows more complex, a solid relationship between CIOs and their CISO counterparts will be critical to delivering quality services, products, software, and hardware to customers. “It’s all about building trust with our business partners that we’re providing a service to, with our customers, and with one another,” says Quaine, who heads up the cybersecurity portion of the partnership. “The trust aspect is huge for me — it’s really around doing what you say you’ll do and committing to that. ..."


Industry clouds could be the next big thing

Industry clouds are of interest because of their potential to create value for both customers and public cloud providers. Established companies in industries feeling the sting of competition from cloud-native disrupters are especially good prospects for these types of solutions. For these companies, moving their core business applications to general-purpose public clouds can be challenging because they often rely on homegrown legacy applications or industry-specific software designed for on-premise data centers. These companies face a difficult choice. Simply “lifting and shifting” applications to the cloud could result in sub-optimal performance. Yet rewriting or optimizing them for the cloud would be time consuming and costly. Industry clouds have the potential to accelerate and take the risk out of their cloud migrations. An essential component of an industry cloud is that it must address the specific requirements of the industry it is designed to serve. For example, healthcare providers place a high priority on improving the patient experience but also require high levels of security, data protection, and privacy. These are necessary to demonstrate compliance with Health Insurance Portability and Accountability Act (HIPAA) regulations.


Algorithms will soon be in charge of hiring and firing.

Algorithms can currently be tasked with making or informing decisions that are deemed "high-risk" by the TUC; for example, AI models can be used to determine which employees should be made redundant. Automated absence management systems were also flagged, based on examples of the technology wrongfully concluding that employees were absent from work, which incorrectly triggered performance processes. One of the most compelling examples is that of AI tools being used in the first stages of the hiring process for new jobs, where algorithms can be used to scrape CVs for key information and sometimes undertake background checks to analyze candidate data. In a telling illustration of how things might go wrong, Amazon's attempts to deploy this type of technology were scrapped after it was found that the model discriminated against women's CVs. According to the TUC, if left unchecked AI could, therefore, lead to greater discrimination in high-impact decisions, such as hiring and firing employees. The issue is not UK-specific. Across the Atlantic, Julia Stoyanovich, professor at NYU and founding director of the Center for Responsible AI, has been calling for more stringent oversight of AI models in hiring processes for many years.



Quote for the day:

"Every time you are tempted to react in the same old way, ask if you want to be a prisoner of the past or a pioneer of the future." -- Chopra

Daily Tech Digest - March 28, 2021

Why risk assessment is important for financial institutions in a digital era

Given that financial institutions are custodians of significant amounts of third-party data, much of which is personal and sensitive, it is imperative now more than ever to manage and assess the risks and their impact on the existing ecosystem to drive optimum value from their digital initiatives. The risks are indeed multiplied where data is involved. With the ubiquity of online banking apps and services, a breach at some point is almost certain, and that is when banks must be prepared. As the cadence of cyberattacks increases, organisations can no longer hide internal dysfunction from external stakeholders. “[When] an inevitable breach, audit or Royal Commission happens, financial institutions will only survive the exposure if they can show that they have actually taken all reasonable steps to protect themselves,” Greaves said. Being in control of the high-risk data must be the first step in mitigating these risks. “The key to treating information risk is to have full control of that information. If an institution is unfamiliar with what data it has, who is doing what to it, and where and how it is stored within its systems, it will be unable to control it or protect it,” Greaves said.


How HR Leaders Are Preparing for the AI-Enabled Workforce

Predicting the nature of future jobs is, of course, difficult or impossible to do with precision. And even if predictions are possible, they will probably differ substantially from job to job. Nevertheless, some companies are embarking on approaches that predict the future of either all jobs in the organization, those that are particularly likely to be affected by AI, or jobs that are closely tied to future strategies. ... Some companies are making specific job predictions based on their strategies or products. In Europe, a consortium of microelectronics companies is devoting 2 billion euros to train current and future employees on electronic components and systems. General Motors is focused on training its employees to manufacture electric and autonomous vehicles. Verizon is focused on hiring and training data scientists and marketers to expand its 5G wireless technology. SAP is focused on growing employees’ skills in cloud computing, artificial intelligence development, blockchain, and the internet of things. The raging bull of machine learning has turned out to be slower and calmer than many people predicted a few years ago. But any rancher knows you should never turn your back on a bull, no matter how docile it seems.


Software Defined Everything Part 6: Infrastructure

SDxI must understand the appropriate context of users, applications, devices, and locations related to the creation of a virtual machine, container, or even a data flow or set of network attributes such as source/destination addresses and tags. Advanced infrastructure needs to be able to provide data gathering on context-relevant metrics for debugging, security and audit, performance management, and billing and marketing. Historically, context-awareness was the purview of specialized point products such as networking devices (primarily Layers 4-7) that directed and processed traffic based on rules and inspecting incoming data. But this processing only occurs at specific points in the infrastructure. SDxI applications are more demanding and need holistic context-awareness across networking, compute, and storage to optimize workload placement in context of what a user, device, or app is trying to accomplish. For example, efforts are underway to add contextual-driven automation to both private and public cloud environments via OpenStack Heat. In this model, external context-based triggers drive VMs and their computer, storage, and network resources to spin up or down to maximize performance, minimize latency, or meet appropriate business objectives.
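The "external context-based triggers drive VMs to spin up or down" idea can be sketched in vendor-neutral form. This is not OpenStack Heat syntax, only the decision logic behind such a trigger, and the thresholds are invented.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Holistic context a software-defined infrastructure layer might gather."""
    p95_latency_ms: float
    cpu_utilisation: float
    user_tier: str          # e.g. "premium" users justify more headroom

def scaling_decision(ctx: Context, current_instances: int) -> int:
    """Return the desired instance count from context-based triggers (illustrative rules)."""
    desired = current_instances
    if ctx.p95_latency_ms > 250 or ctx.cpu_utilisation > 0.80:
        desired += 2 if ctx.user_tier == "premium" else 1   # scale out faster for premium traffic
    elif ctx.p95_latency_ms < 80 and ctx.cpu_utilisation < 0.30:
        desired = max(1, desired - 1)                        # scale in when idle
    return desired

print(scaling_decision(Context(p95_latency_ms=300, cpu_utilisation=0.65, user_tier="premium"), 4))  # 6
```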


5 fintech trends to watch out for in 2021

While digital banking had been around long before the pandemic, it spiked in usage amidst the pandemic. Research shows that about 50 per cent of consumers are using digital banking products more since the pandemic, with 87 per cent of them planning to continue this increased usage after the pandemic. This shows that digital banking has evolved from a “nice-to-have” to a “must-have” solution for consumers and businesses. However, despite the convenience in use that digital banking offers, many consumers are still wary of the dangers that digital banking solutions bring. ... Just as self-service solutions have become rampant during the pandemic to avoid possible infection, autonomous finance is expected to rise in 2021 as well. Several fintech solutions today make it possible for people to manage their money, open accounts, apply for loans, and more with just a click of a button. Thanks to AI and machine learning, these solutions are now more accessible than lining up in traditional banks and going through tedious processes. ... Bitcoin’s rising price is due to various reasons, some of which include growing institutional interest, usage as a hedge against inflation, and PayPal’s official entrance into the crypto scene.


Navigating Data Security Within Data Sharing In Today’s Evolving Landscape

Successful cross-enterprise data strategies bring a unified approach to data integration, quality, governance, and data sharing. Innovation does not come through a set of siloed products; it comes from a single platform that moves and manages different types of data under one roof. To create a successful data management strategy and avoid any data security mishaps, chief data officers (CDOs) and their teams should start by setting up governance and establishing business rules and system controls for access. CDOs report the most success when their data sharing architecture is built on microservices that answer business questions. That is, what data is needed to provide insights into the most difficult business problems? For example, the CDO of a large Internet-based home furnishing company recently shared that when they treat data integration as a business transformation project, they receive better requirements about business needs, data security and data trust, more focus from stakeholders, and broader adoption across the organization and within roles. Another best practice that both encourages sharing and labels only trusted, vetted data sources is the concept of certified versus uncertified data sets.


Factorized layers revisited: Compressing deep networks without playing the lottery

The key principle underlying these two natural methods, neither of which requires extra hyperparameters, is that the training behavior of a factorized model should mimic that of the original (unfactorized) network. We further demonstrate the usefulness of these schemes in two settings beyond model compression where factorized neural layers are applied. The first is an exciting new area of knowledge distillation in which an overcomplete factorization is used to replace the complicated and expensive student-teacher training phase with a single matrix multiplication at each layer. The second is for training Transformer-based architectures such as BERT, which are popular models for learning over sequences like text and genomic data and whose multi-head self-attention mechanisms are also factorized neural layers. Our work is part of Microsoft Research New England’s AutoML research efforts, which seek to make the exploration and deployment of state-of-the-art machine learning easier through the development of models that help automate the complex processes involved.
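The basic structure of a factorized neural layer is easy to show with NumPy: an m-by-n weight matrix is replaced by the product of two thinner matrices, which cuts parameters sharply when the rank is small. This sketch shows only that structure, not the paper's initialization or training schemes (which, per the excerpt, aim to mimic the training behavior of the unfactorized network); all sizes are illustrative.

```python
import numpy as np

m, n, r = 512, 512, 32          # layer dimensions and factorization rank (illustrative)
rng = np.random.default_rng(0)

W = rng.normal(size=(m, n))     # dense layer: m*n parameters
U = rng.normal(size=(m, r))     # factorized layer: W is replaced by the product U @ V
V = rng.normal(size=(r, n))     # parameters: r*(m+n), far fewer when r << min(m, n)

x = rng.normal(size=(n,))
dense_out = W @ x               # one big matrix multiply
factored_out = U @ (V @ x)      # two thin multiplies, same output shape

print(W.size, U.size + V.size)  # 262144 vs 32768 parameters
print(dense_out.shape, factored_out.shape)
```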


The Taboo Of Remote Working And Hiring In India

Barring some of India’s major cities, good Internet connectivity is still the stuff of dreams. Fighting bugs while your strongest warrior is out cold due to poor connectivity is every CTO’s worst nightmare. It’s not just about dire circumstances, though; many young developers live in shared accommodation without a personal space to focus on work. Remote work can succeed only with the implicit understanding that work time at home is as focused as work time in the office. In addition to poor Internet, the lack of facilities such as a good work desk and a well-lit room also hampers productivity. One of the prime reasons why developers are aching to come back to the office is because coding while in bed and in your PJs has an early expiry date. Then there’s connection. Indian workplaces have traditionally depended more on verbal communication than written documentation. We’d rather walk up to someone and provide feedback than write it up in precise points in an email. With remote work, both developers and managers need to adopt a different cadence of verbal and written communication that is direct and constructive.


Behavioral Psychology Might Explain What’s Holding Boards Back

Boards can only be effective if they have the ability to come to a consensus. No one wants to feel that the board is made up of factions with irreconcilable differences. Even when the board undergoes a shake-up, like the addition of an activist director, they tend to quickly reach a new equilibrium. But while consensus-building is important, boards may be too inclined to seek harmony or conformity. This can lead to groupthink, where dissenting views are not welcomed or entertained. In fact, while most boards work to solicit a range of views and come to a consensus on key issues, 36% of directors say it is difficult to voice a dissenting view on at least one topic in the boardroom. This can point to dysfunctional decision-making as the board members avoid making waves. In fact, the most common reason that directors cite for stifled dissent on their boards is the desire to maintain collegiality among their peers. Groupthink is also magnified when the board is not effectively educated on a topic, or does not have access to the right information. Board materials may come too late for members to have any real time to review and reflect on the information before a meeting.


Digital transformation: This is why CIOs need to stay brave and keep on innovating

Hackland recognises that it can be difficult for CIOs to gain funding for innovative projects, especially in organisations with competing priorities. But when there's a chance to try something new, the opportunity must be grabbed – not just in terms of the potential benefits it might bring to the company itself but also in terms of professional development. "You're learning and your people are learning," says Hackland, referring to the importance of experimentation. "They're engaged in something new, they're not just doing lights-on, which I think is really important. They're getting to play with new technologies." Which brings us back to Williams' recent foray into virtual reality, which was one such attempt to try something new. The intention was to allow users of a bespoke VR app to view and manipulate the new car in its livery in 3D. The app, which was created by an external agency, was made available for fans to download on the Apple App Store and Google Play Store. However, when pictures of the FW43B started appearing online, the team couldn't be sure if only the image data for the new car had been unpacked or whether the app itself had been compromised.


Platform Engineering As A (Community) Service

At its core, platform engineering is all about building, well, a platform. In this context, I mean an internal platform within an organisation, not a general business platform for external consumers. This platform serves as a foundation for other engineering teams building products and systems on top of it for end users. Concrete goals include: improving developer productivity and efficiency, through things like tooling, automation and infrastructure-as-code; providing consistency and confidence around complex cross-cutting areas of concern, such as security and reliable auto-scaling; and helping organisations to grow teams in a sustainable manner to meet increased business demands. Matthew Skelton concisely defines a platform as “a curated experience for engineers (the customers of the platform)”. This phrase “curated experience” very nicely encapsulates the essence of what I have come to recognise and appreciate as being a crucial differentiator for successful platforms. Namely, it’s not just about one technology solving all your problems. Nor is it about creating a wrapper around a bunch of tech.



Quote for the day:

“Nobody talks of entrepreneurship as survival, but that’s exactly what it is.” -- Anita Roddick

Daily Tech Digest - March 27, 2021

A Day in the Life of a DevSecOps Manager

The goal of a DevSecOps team, in my view, is embedding application security into development through enablement, iteration, and continuous feedback – also sometimes called "shifting security left." This requires talking to other folks and making sure you can offer them something that solves your problem while enabling them to solve theirs. No one wants to "stop" producing value to take care of security concerns, which can often be how it feels to interact with security teams. Everyone already has a full roadmap. Why does this security concern need to be addressed now? Through a DevSecOps philosophy, which mostly means taking agile principles from engineering and applying them to security work, I use those aforementioned days of meetings to determine how a particular security concern can be mitigated or eradicated without adding friction to the development pipeline. ... Our DevSecOps team, for example, can write a cryptography library for engineering that uses standard libraries in an appropriate manner, avoiding common implementation mistakes that could lead to data exposure. Sometimes we may mandate a particular approach, but typically we offer a library like this to engineering and sell it as saving them development time.
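What such an internal library might look like, in heavily simplified form, is sketched below using the vetted Fernet recipe from the Python cryptography package, so application teams never pick modes, IVs or key sizes themselves. This is an illustrative wrapper, not the author's actual library, and the module name is hypothetical.

```python
"""companycrypto.py: illustrative internal wrapper a DevSecOps team might publish,
so product teams get authenticated encryption without choosing modes, IVs or key sizes."""
from cryptography.fernet import Fernet   # vetted high-level recipe (AES + HMAC)

def generate_key() -> bytes:
    """Create a key; in practice this would live in a secrets manager, not in code."""
    return Fernet.generate_key()

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    return Fernet(key).encrypt(plaintext)

def decrypt(token: bytes, key: bytes) -> bytes:
    # Raises cryptography.fernet.InvalidToken if the data was tampered with.
    return Fernet(key).decrypt(token)

# Usage by an application team:
key = generate_key()
secret = encrypt(b"card-on-file token", key)
assert decrypt(secret, key) == b"card-on-file token"
```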


Artificial Intelligence is the Key to Economic Recovery

Artificial intelligence technologies already have tremendous economic potential in the private and business sectors. The value of the global AI market in 2019 is estimated by Gartner and McKinsey at USD 1.9 trillion, and the forecast for 2022 is USD 3.9 trillion. ... There are reasons to believe it will be even more so in the post-Corona era. About two years ago, before the outbreak of the pandemic, Prime Minister Netanyahu asked me and my colleague Professor Eviatar Matanya to lead a national initiative in the field of intelligent systems that would make Israel one of the top five countries in the world in this technology within five years. ... AI has a much wider spectrum than cyber technology. Its applications have far-reaching implications in most areas of our lives, including security, medicine, transportation, automation, retail, sales, customer service and virtually every field relevant to modern life. The various learning algorithms, along with the tremendous increase in computing power, are already beginning to penetrate all areas of our lives, and their understanding requires mastery not only of the “natural” technological disciplines – such as computer science, mathematics and engineering – but also of social, legal, business and even philosophical aspects.


The war against the virus also fueling a war against digital fraud

The study also found that, as of March 16, 2021, 36% of consumers said they had been targeted by digital fraud related to COVID-19 in the last three months, a higher share than approximately one year ago. In April 2020, 29% said they had been targeted by digital fraud related to COVID-19. In the U.S., this percentage increased from 26% to 38% in the same timeframe. Gen Z, those born 1995 to 2002, is currently the most targeted of any generation at 42%. They are followed by Millennials (37%). Similarities were observed in the U.S., where Gen Z was most targeted at 53%, followed by Millennials at 40%. “TransUnion documented a 21% increase in reported phishing attacks among consumers who were globally targeted with COVID-19-related digital fraud just from November 2020 to recently,” said Melissa Gaddis, senior director of customer success, Global Fraud Solutions at TransUnion. “This revelation shows just how essential acquiring personal credentials is for carrying out any type of digital fraud. Consumers must be vigilant and businesses should assume all consumer information is available on the dark web and have alternatives to traditional password verification in place.”


‘Hacktivism’ adds twist to cybersecurity woes

Earlier waves of hacktivism, notably by the amorphous collective known as Anonymous in the early 2010s, largely faded away under law enforcement pressure. But now a new generation of youthful hackers, many angry about how the cybersecurity world operates and upset about the role of tech companies in spreading propaganda, is joining the fray. And some former Anonymous members are returning to the field, including Aubrey Cottle, who helped revive the group’s Twitter presence last year in support of the Black Lives Matter protests. Anonymous followers drew attention for disrupting an app that the Dallas police department was using to field complaints about protesters by flooding it with nonsense traffic. They also wrested control of Twitter hashtags promoted by police supporters. “What’s interesting about the current wave of the Parler archive and Gab hack and leak is that the hacktivism is supporting antiracist politics or antifascism politics,” said Gabriella Coleman, an anthropologist at McGill University, Montreal, who wrote a book on Anonymous.


Sweden’s Fastest Supercomputer for AI Now Online

“Research in machine learning requires enormous quantities of data that must be stored, transported and processed during the training phase. Berzelius is a resource of a completely new order of magnitude in Sweden for this purpose, and it will make it possible for Swedish researchers to compete among the global vanguard in AI,” said Ynnerman. Berzelius will initially be equipped with 60 of the latest and fastest AI systems from Nvidia, with eight graphics processing units and Nvidia Networking in each. Jensen Huang is Nvidia’s CEO and founder. “In every phase of science, there has been an instrument that was essential to its advancement, and today, the most important instrument of science is the supercomputer. With Berzelius, Marcus and the Wallenberg Foundation have created the conditions so that Sweden can be at the forefront of discovery and science. The researchers that will be attracted to this system will enable the nation to transform itself from an industrial technology leader to a global technology leader,” said Huang. The facility has networks from Nvidia, application tools from Atos, and storage capacity from DDN. The machine has been delivered and installed by Atos. Pierre Barnabé is Senior Executive Vice-President and Head of the Big Data and Cybersecurity Division at Atos.


Why data classification should be every organisation’s first step on the path to effective protection

The value of classification was once limited to protection from insider threats. However, with the growth in outsider threats, classification takes on a new importance. It provides the guidance for information security pros to allocate resources towards defending the crown jewels against all threats. Internal actors cause both malicious and unintentional data loss. With a classification program in place, the mistyped email address in a message with sensitive data is flagged. Files that are intentionally being leaked are classified as sensitive and get the attention of security solutions, such as Data Loss Prevention (DLP). On the other hand, external threat actors seek data that can be monetised. Understanding which data within your organisation has the greatest value, and the greatest risk for theft, is where classification delivers value. By understanding the greater potential impact of an attack on sensitive data, advanced threat detection tools escalate alarms accordingly to allow more immediate response. Organisations generate data every day. This comes as no surprise. However, what might be surprising is the accelerating volume at which the data is being created.
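A hedged sketch of how a classification label can drive that kind of DLP decision: a message carrying sensitively labelled data and addressed outside the organisation is blocked or flagged before it leaves. The labels, domain and rules below are made up for illustration.

```python
# Illustrative classification-driven DLP check; labels, domains and rules are made up.
SENSITIVITY_RANK = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}
INTERNAL_DOMAIN = "example.com"

def dlp_verdict(recipient: str, attachment_labels: list) -> str:
    external = not recipient.endswith("@" + INTERNAL_DOMAIN)
    highest = max((SENSITIVITY_RANK[label] for label in attachment_labels), default=0)
    if external and highest >= SENSITIVITY_RANK["confidential"]:
        return "block"     # crown jewels leaving the organisation
    if external and highest == SENSITIVITY_RANK["internal"]:
        return "warn"      # e.g. the mistyped external address case
    return "allow"

print(dlp_verdict("partner@gmail.com", ["confidential"]))    # block
print(dlp_verdict("colleague@example.com", ["restricted"]))  # allow
```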


6 Principles for Hybrid Work Wellbeing

Wellbeing is both an individual and a team sport. Everyone’s individual circumstances are unique—from caring for a sick parent to juggling the demands of remote learning to struggling with racial injustice. Each of us needs to define our boundaries based on what we can and can’t do—and own them. In practice, this means deciding what time you start work, deciding what time you finish work, and sticking to those commitments while communicating them to your team, whether you’re working remotely or in person. Technology can be your friend here. For example, set your status message in Teams to indicate when you're prioritizing family time. When we all own and respect boundaries, we create a culture of mutual support that promotes everyone’s wellbeing. ... Meeting bloat is one of remote work’s most counterproductive trends, though the reasons for it aren’t hard to understand. Without well-defined ways to indicate progress and participation, showing up to a meeting has become the signal of doing work. It’s the 21st-century version of punching the clock. This helps neither employees nor employers. Organizations can undercut this expectation—and the drain on wellbeing that comes from too many meetings—by fostering a meeting culture centered on preparation and purpose.


Remote working burn-out a factor in security risk

“Lockdown has been a stressful time for everyone, and while employers have admirably supported remote working with technology and connectivity, the human factor must not be overlooked,” said Margaret Cunningham, Forcepoint’s principal research scientist. “Interruptions, distractions and split attention can be physically and emotionally draining and, as such, it’s unsurprising that decision fatigue and motivated reasoning continues to grow. “Companies and business leaders need to take into account the unique psychological and physical situation of their home workers when it comes to effective IT protection. “They need to make their employees feel comfortable in their home offices, raise their awareness of IT security and also model positive behaviours. Knowing the rules, both written and implied, and then designing behaviour-centric metrics surrounding the rules can help us mitigate the negative impact of these risky behaviours.” Cunningham said that although both older and younger employees tended to report they were receiving similar levels of organisational support while working remotely, the emotional experience, and how different generations use technology, was markedly different.


Impact of Big Data on Innovation, Competitive Advantage, Productivity, and Decision Making

Advances in technology have enabled individuals and businesses to collect large amounts of structured and unstructured data from various sources like never before. Data from social media, user-generated content, the internet, healthcare, manufacturing, supply chains, financial institutions, and sensors has grown exponentially. This paper’s objective is to review how big data drives and impacts innovation, competitive advantage, productivity, and decision support. Methodology: A comprehensive literature review on big data was conducted to identify the impact of big data analytics on innovation, competitive advantage, productivity, and decision support. The reviewed literature laid the foundation for a model developed from that review as well as case studies and forecasts by market leaders. Big data remains the latest buzzword among businesses. A new model is suggested that identifies the correlation between big data and innovation, competitive advantage, productivity, and decision support. Findings: A review of scholarly literature and existing case studies finds a gap between existing frameworks and the integration of big data into various business and management functions and objectives.


Rethinking data strategies: Shifting the focus from technology to insights

We need to redefine data strategy. Businesses need to move away from collecting data for data’s sake. Instead, we need to focus on data-driven technological innovation that delivers meaningful customer experiences, using targeted data to provide the right insights about customers. Today, businesses are collecting data en masse. But what are the benefits of collecting this data? What insight does it provide about customers or competitors? Most businesses believe they know their customer profile, and acquire more technology and data to meet this perceived customer profile. By rethinking data strategies, however, and exploring the value of the data being collected and how it is being collected, businesses will understand their customers’ wants and needs more effectively. Indeed, knowing your customer is not only about tracking and tracing their behaviour digitally: you first need to define what kind of data insight you want to learn from your customer, then work out how to leverage the new insights amassed through targeted data collection to deliver tailored features back to the customer quickly and easily – engaging customers in a product or service when they need it most.



Quote for the day:

"The leader has to be practical and a realist, yet must talk the language of the visionary and the idealist." -- Eric Hoffer

Daily Tech Digest - March 26, 2021

Text authentication is even worse than almost anyone thought

For years, the key argument against relying on text message confirmations has been that they are susceptible to man-in-the-middle attacks, which is still true. But this peek into the authorized infrastructure for text messages means that text takeovers can happen far more simply. There are plenty of easily accessed apps that make text-like authentication far more secure, including Google Authenticator, Symantec's VIP Access, Adobe Authenticator, and Signal. Why risk unencrypted, easily stolen texts for account access or anything else? For the moment, let's set aside how relatively easy and low-cost it is to move to a more secure version of text confirmations. Let's also, for the moment, set aside the compliance and operational risks your team is taking by letting the enterprise grant account access via unencrypted texts. How about solely looking at the risk and compliance implications of offering third-party access via unencrypted text authentications? Remember this from the Vice piece: "The (attacker) sent login requests to Bumble, WhatsApp, and Postmates, and easily accessed the accounts." Once a bad guy takes control of a customer's texts, a vast domino effect kicks in, where lots of businesses can be improperly accessed.
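
Why are authenticator apps stronger? They compute one-time codes locally from a shared secret instead of receiving anything over the carrier network. The sketch below shows the underlying TOTP algorithm (RFC 6238) in Python; the Base32 secret is a placeholder, and real apps such as Google Authenticator layer provisioning, clock-drift tolerance and secure secret storage on top.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password from a shared Base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval              # number of 30-second steps since the epoch
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Placeholder secret for illustration only; a real secret is provisioned via QR code
print(totp("JBSWY3DPEHPK3PXP"))  # prints the current 6-digit code
```

Because the code is derived on the device and never transits the phone network, hijacking a victim's SMS routing yields nothing useful to an attacker.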


How to Mitigate Low-Code Security Risks

On the low-code spectrum, there is the attitude that “people aren’t really doing mission-critical applications — they’re mainly doing prototyping, or back office automations,” Wysopal said. Or, since low-code applications often do not publicly expose an application, they are deemed low-risk and fall to the bottom of the security team’s inbox and list of to-dos. “We’re still in a world where people don’t secure all applications,” he said. However, these attitudes are short-sighted, since “applications aren’t islands,” Wysopal said. Even if applications are intended for internal use only, phishing attempts could expose credentials and give attackers access to the network. From there, attackers could reach sensitive resources or steal infrastructure for compute power. Thus, all low-code users should be aware of potential threats and change habits to address these risks accordingly. However, the onus is also on low-code vendors to sufficiently secure their platforms. Having a vulnerability disclosure policy, a bug bounty program and an easy means to accept bug reports from white hat security researchers will be necessary to continually patch issues. Low-code continues to permeate more and more digital operations, opening up novel potential for citizen developers.


Smart cities are built on data

Cities must understand the full set of stakeholders who should be involved in setting data governance policies. They include civic authorities, the public, the private sector, technology providers and academic experts. Then they need to look at smart city projects as an “integrated ecosystem,” which means using the data for the collective benefit of the city, public and private sector, Chiasson said. “In a lot of cases, the costs are concentrated, but the benefits are diffuse,” he said. For instance, improving traffic flow would benefit many stakeholders, but the cost tends to be concentrated with the agency paying for sensors and algorithms. “Holistically tying that together … is the way you start to have data that’s good and data that’s valuable,” Chiasson said. And valuable means data that can be turned into usable information. For instance, a city may have ridership and traffic data, but if it can’t use that data to reduce emissions, the data isn’t helpful. “Data is not information,” Dennis said. “Oftentimes, what we’ll see is that there’s too much data and not enough ability for the city to convert it into information holistically.” Urban SDK has a data specification for smart cities that lets them aggregate data from a range of sources and normalize it into one database. From there, the data can be analyzed and turned into usable information.
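
As an illustration of the "aggregate and normalize" step, the sketch below maps two heterogeneous feeds onto one common table. The field names and schema are hypothetical, chosen for the example; they are not Urban SDK's actual specification.

```python
import pandas as pd

# Hypothetical common schema; real smart-city specifications define far richer fields.
COMMON_COLUMNS = ["timestamp", "source", "location_id", "metric", "value"]

def normalize_transit(raw: pd.DataFrame) -> pd.DataFrame:
    # The transit feed in this sketch reports 'time', 'stop_id' and 'riders'
    return pd.DataFrame({
        "timestamp": pd.to_datetime(raw["time"]),
        "source": "transit",
        "location_id": raw["stop_id"],
        "metric": "ridership",
        "value": raw["riders"],
    })[COMMON_COLUMNS]

def normalize_traffic(raw: pd.DataFrame) -> pd.DataFrame:
    # The traffic-sensor feed in this sketch reports 'observed_at', 'sensor' and 'vehicles_per_hour'
    return pd.DataFrame({
        "timestamp": pd.to_datetime(raw["observed_at"]),
        "source": "traffic",
        "location_id": raw["sensor"],
        "metric": "vehicle_count",
        "value": raw["vehicles_per_hour"],
    })[COMMON_COLUMNS]

transit = pd.DataFrame({"time": ["2021-03-31 08:00"], "stop_id": ["S12"], "riders": [340]})
traffic = pd.DataFrame({"observed_at": ["2021-03-31 08:00"], "sensor": ["T07"], "vehicles_per_hour": [1250]})

# One normalized table that analysts can actually turn into information
combined = pd.concat([normalize_transit(transit), normalize_traffic(traffic)], ignore_index=True)
print(combined)
```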


GAO warns on cyber risks to power grid

The country's electrical systems are increasingly susceptible to cyberattacks, according to government auditors, and there is uncertainty about the extent to which a localized attack might cascade through power distribution systems. A new report from the Government Accountability Office examines the vulnerabilities of electricity grid distribution systems, how some states and industry actions have hardened those systems and the extent to which the Department of Energy has addressed risks by implementing the national cybersecurity strategy. Government and industry officials told GAO that a cyberattack on a grid distribution system would likely have localized effects, but a coordinated attack could have widespread consequences. However, the officials conceded that assumption is based on their professional experience, GAO noted, and none of them were aware of an assessment that confirmed their claims. "Moreover, three federal and national laboratory officials told us that even if a cyberattack on the grid's distribution systems was localized, such an attack could still have significant national consequences, depending on the specific distribution systems that were targeted and the severity of the attack's effects," according to the report.


Vaccinated Employees Returning with Un-Vaccinated Devices

With vaccines making their way throughout the country and lockdown restrictions loosening up, a debate surrounding the office return emerges. Hybrid vs. remote vs. full-time – every scenario is different, as are the rules and accommodations needed to make the return safe and productive for all. User systems will be coming back as well, and their absence from the network could present new challenges. In the vast majority of cases, computers and mobile devices have not been on-premise for close to a year. From a security perspective, the best way to approach the “repatriation” of these systems onto the office network is to regard them as potentially infected. At the very least, consider that these devices are not in the same state as when they left. During this time, the company office extended into the homes of employees, and the line separating home from work was essentially erased. ... Even with device management in place, the potential for unaccounted-for change on field devices is significant. Consider this both a threat and an opportunity. The first goal is to ensure that workstations have been adequately patched and updated. Devices, security software and applications must be validated and brought up to date before actively working on networks that hold sensitive, critical data.
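
One way to operationalise "validate before trusting" is a simple posture gate that checks patch age and security-agent health before a returning device is allowed onto networks holding sensitive data. The sketch below is hypothetical; the attributes and thresholds are illustrative, not any specific product's policy.

```python
from datetime import date, timedelta

# Hypothetical posture gate for returning devices; fields and thresholds are illustrative only.
MAX_PATCH_AGE = timedelta(days=30)

def admit_to_corporate_network(device: dict, today: date = date.today()) -> bool:
    """Treat returning devices as potentially compromised until basic checks pass."""
    patched_recently = today - device["last_patched"] <= MAX_PATCH_AGE
    agent_healthy = device["edr_agent_running"] and device["definitions_current"]
    return patched_recently and agent_healthy

laptop = {
    "hostname": "FIN-LT-0042",
    "last_patched": date(2020, 6, 15),   # has not touched the office network in months
    "edr_agent_running": True,
    "definitions_current": False,
}
if not admit_to_corporate_network(laptop):
    print("Quarantine VLAN: patch and update security tooling before granting full access")
```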


Cyber threats, ongoing war for talent, biggest concerns for tech leaders

When it comes to talent, 44% of respondents said that finding enough qualified employees to fill open positions is the biggest risk they face over the coming year. An almost equal percentage say the task is just as difficult as it was a year ago. To close the skills gap, companies said they are trying a variety of strategies, including building flexible, on-the-job training opportunities (61%); rewriting job descriptions or job titles (42%); creating an apprenticeship program (39%); and eliminating requirements that applicants have certain types of academic degrees (24%). Some other pathways to finding the right talent: easing location restrictions; gamification of training; and prioritizing the search for candidates with a diversity of career backgrounds. In fact, nearly three-quarters of the respondents said that in the past year they’ve filled open technology positions with candidates with liberal arts degrees. An equal percentage have taken internal candidates from non-tech teams. About half of tech leaders have hired people with no college degree at all. A little over 60% of survey participants said their company is ahead of the curve when it comes to investing in new technology, and 35% said they’re about average. 


When Every Millisecond Matters in IoT

It starts with a network of seismic sensors, which are used to detect the P-waves, providing a ton of information that can be used to calculate the size and location of the damaging earthquake. The data is distributed in real time to every subscribed party: emergency response, infrastructure and everyday users who have the app installed. The next critical piece of technology is a real-time network: a super-fast, low-latency and reliable infrastructure that is optimized to broadcast small amounts of data to huge audiences of subscribers. This may include the earthquake data itself, as well as push notifications or alerts specified by the app developer. This is where every millisecond matters, so ensuring reliability at scale, even in unreliable environments, is mission critical. When selecting a real-time network, whether you go with a hosted service or build it yourself, app developers need to understand the underlying technology, real-time protocols and other indicators of scalability. Lastly, you need the application that connects your real-time IoT network to the deployed sensors, where notifications are transmitted and the response is automated based on incoming data.
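
The "broadcast small payloads to huge audiences" piece is, at heart, a publish/subscribe fan-out. The in-process sketch below shows only the pattern (topics, subscribers, a tiny alert payload); a production earthquake-warning network would run this over a distributed, low-latency broker rather than local function calls, and the topic and payload fields here are assumptions for illustration.

```python
from collections import defaultdict
from typing import Callable, Dict, List

# In-process publish/subscribe sketch of the fan-out pattern; a real deployment
# would use a distributed low-latency broker, not local function calls.
class MiniBroker:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        # Fan one small payload out to every subscriber of the topic
        for handler in self._subscribers[topic]:
            handler(payload)

broker = MiniBroker()
broker.subscribe("quake.alerts", lambda a: print(f"Emergency services: M{a['magnitude']} near {a['region']}"))
broker.subscribe("quake.alerts", lambda a: print(f"Push notification: drop, cover, hold on ({a['region']})"))

# The sensor pipeline publishes one compact alert; every subscriber receives it
broker.publish("quake.alerts", {"magnitude": 5.8, "region": "Los Angeles"})
```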


What businesses need to know to evaluate partner cyber resilience

Protecting customer data is vital and now regulated in certain geographies with the introduction and implementation of privacy laws like the GDPR and the CCPA. Non-compliance with either of these regulations may result in large fines that can pose a serious threat to business continuity, depending on the size of the company and the violation. While the GDPR and the CCPA are two of the most well-known regulations, at least 25 U.S. states have data protection laws, with Virginia being the most recent to enact legislation. Legislation aside, organizations must protect data and be able to recover it in the event of any loss. Not being able to recover data, even when the fault lies with a partner, can quickly propel an organization toward financial setbacks, damaged relationships and a diminished reputation. When it comes to evaluating a partner, ask them to detail their backup strategy and policies. Regular infection simulations and backup procedure tests are crucial in making sure you are prepared for a real DEFCON scenario. Businesses must have endpoint security in place as cybercriminals are constantly developing new ways to attack networks, take advantage of employee trust and steal data. In traditional office building settings, employees were better protected within the corporate network.
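
When probing a partner's backup claims, one concrete question is whether restores are actually tested, not just scheduled. A minimal, hypothetical verification compares checksums of source files against their restored copies; real programmes add restore-time objectives, infection simulations and off-site copies on top. The paths below are placeholders.

```python
import hashlib
from pathlib import Path

# Hypothetical restore-verification sketch: compare checksums of source files
# against the copies recovered from a test restore. Paths are placeholders.
def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: Path, restored_dir: Path) -> list:
    """Return the relative paths that are missing or differ after a test restore."""
    failures = []
    for src in source_dir.rglob("*"):
        if src.is_file():
            rel = src.relative_to(source_dir)
            restored = restored_dir / rel
            if not restored.exists() or sha256(src) != sha256(restored):
                failures.append(str(rel))
    return failures

# Example: report anything the test restore failed to bring back intact
print(verify_restore(Path("/data/finance"), Path("/mnt/test-restore/finance")))
```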


How one data scientist is pioneering techniques to detect security threats

It was all pretty accidental, not something I had planned. I did really well in college—I was first in my class. And I finished in 2010, in the middle of the Great Recession, which hit the Spanish labor market horribly. At that time, the unemployment rate was 25 percent. The lucky ones, like people in engineering, were getting job offers. But when you’re in technology, the only options in Spain are to work for a consulting company or to do support or sales. There weren’t any entry-level jobs in research and development. So, I started a master’s with a group doing research on biometrics. The master’s was also in computer science and very related to artificial intelligence and a lot of interconnected fields like multimedia signal processing, computer vision, and natural language processing. I did my thesis on statistics around forensic fingerprints, and the probability of a random match between a latent fingerprint found at a crime scene and a random person that could have been wrongly convicted of that crime. ... One good approach is to find an internship that has some connection between doing data science and security and fraud, even if it’s just loosely related.


What To Expect From The New US-India Artificial Intelligence Initiative

India and the US can complement each other in this collaborative effort to ensure equitable progress. “For the US, India represents a massive consumer market – and one of the world’s largest troves of data. Technology firms in the US accessing this data will be like energy firms finding oil in the Middle East,” said Prakash. “For India, the US algorithms are solutions to a variety of development challenges India faces, from bringing banking to hundreds of millions of people to modernising the Indian military to offering healthcare to the masses. At the same time, for US technology firms, India churns out massive amounts of engineers and computer scientists – critical talent that these firms need.” Another major reason for a partnership between India and the US is the new geopolitical reality. China’s growing influence in the field of AI is a pressing concern. “What India and the US bring to the table is a supposedly democratic governance model for emerging technology,” said Basu. “Despite the change in administration from Trump to Biden, there are certain things where there is continuity – like distrust in China and Chinese technology....”



Quote for the day:

"Leadership is a dynamic process that expresses our skill, our aspirations, and our essence as human beings." -- Catherine Robinson-Walker