Daily Tech Digest - September 26, 2021

You don't really own your phone

When you purchase a phone, you own the physical parts you can hold in your hand. The display is yours. The chip inside is yours. The camera lenses and sensors are yours to keep forever and ever. But none of this, not a single piece, is worth more than its value in scrap without the parts you don't own but are graciously allowed to use — the copyrighted software and firmware that powers it all. The companies that hold these copyrights may not care how you use the product you paid a license for, and you don't hear a lot about them outside of the right to repair movement. Xiaomi, like Google and all the other copyright holders who provide the things which make a smartphone smart, really only wants you to enjoy the product enough to buy from them the next time you purchase a smart device. Xiaomi pissing off people who buy its smartphones isn't a good way to get those same people to buy another phone, a fitness band, or a robot vacuum cleaner. When you set up a new phone, you agree with these copyright holders that you'll use the software on their terms.

Edge computing has a bright future, even if nobody's sure quite what that looks like

Edge computing needs scalable, flexible networking. Even if a particular deployment is stable in size and resource requirements over a long period, to be economic it must be built from general-purpose tools and techniques that can cope with a wide variety of demands. To that end, software defined networking (SDN) has become a focus for future edge developments, although a range of recent research has identified areas where it isn't yet quite up to the job. SDN's characteristic approach is to divide networking into two tasks: control and data transfer. It has a control plane and a data plane, with the former managing the latter by dynamic reconfiguration based on a combination of rules and monitoring. This looks like a good match for edge computing, but SDN typically has a centralised control plane that expects a global view of all network activity. ... Various approaches – multiple control planes, increased intelligence in edge switch hardware, dynamic network partitioning on demand, geography and flow control – are under investigation, as are the interactions between security and SDN in edge management.
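
The control-plane/data-plane split described above can be sketched in a few lines. The names here (`Controller`, `DataPlaneSwitch`, `install_rule`) are invented for this illustration, not taken from any real SDN framework:

```python
# Toy illustration of SDN's separation of control plane and data plane.

class DataPlaneSwitch:
    """Forwards packets using only its flow table; holds no policy logic."""
    def __init__(self):
        self.flow_table = {}          # match field -> output port

    def forward(self, dst):
        # Unknown destinations are dropped until the controller installs a rule.
        return self.flow_table.get(dst, "drop")

class Controller:
    """Centralised control plane: computes rules and pushes them to switches."""
    def __init__(self, switches):
        self.switches = switches

    def install_rule(self, dst, port):
        for sw in self.switches:      # dynamically reconfigure the data plane
            sw.flow_table[dst] = port

sw = DataPlaneSwitch()
ctrl = Controller([sw])
print(sw.forward("10.0.0.5"))          # "drop": no rule installed yet
ctrl.install_rule("10.0.0.5", "port2") # control plane reacts to rules/monitoring
print(sw.forward("10.0.0.5"))          # "port2"
```

The edge-computing tension in the excerpt is visible even in the toy: the single `Controller` holds the global view, which is exactly what multiple control planes and smarter edge switches aim to relax.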

TangleBot Malware Reaches Deep into Android Device Functions

In propagation and theme, TangleBot resembles other mobile malware, such as the FluBot SMS malware that targets the U.K. and Europe or the CovidLock Android ransomware, which is an Android app that pretends to give users a way to find nearby COVID-19 patients. But its wide-ranging access to mobile device functions is what sets it apart, Cloudmark researchers said. “The malware has been given the moniker TangleBot because of its many levels of obfuscation and control over a myriad of entangled device functions, including contacts, SMS and phone capabilities, call logs, internet access, [GPS], and camera and microphone,” they noted in a Thursday writeup. To reach such a long arm into Android’s internal business, TangleBot grants itself privileges to access and control all of the above, researchers said, meaning that the cyberattackers would now have carte blanche to mount attacks with a staggering array of goals. For instance, attackers can manipulate the incoming voice call function to block calls and can also silently make calls in the background, with users none the wiser. 

Why CEOs Should Absolutely Concern Themselves With Cloud Security

Probably the biggest reason cybersecurity needs to be elevated to one of your top responsibilities is simply that, as the CEO, you call most of the shots surrounding how the business is going to operate. To lead anyone else, you have to have a crystal-clear big picture of how everything interconnects and what ramifications threats in one area have to other areas. Additionally, it’s up to you to hire and oversee people who truly understand servers and cloud security and who can build a secure infrastructure and applications. That said, virtually all businesses today are “digital” businesses in some sense, whether that means having a website, an app, processing credit cards with point-of-sale readers or using the ‘net for your social media marketing. All of these things can be potential points of entry for hackers, who happily take advantage of any vulnerability they can find. And with more people working remotely and generally enjoying a more mobile lifestyle, the risks of cloud computing are here to stay.

Better Incident Management Requires More than Just Data

To the uninitiated, all complexity looks like chaos. Real order requires understanding. Real understanding requires context. I’ve seen teams all over the tech world abuse data and metrics because they don’t relate them to their larger context: what are we trying to solve, and how might we be fooling ourselves to reinforce our own biases? Nowhere is this more true than in the world of incident management. Things go wrong in businesses, large and small, every single day. Those failures often go unreported, as most people see failure through the lens of blame, and no one wants to admit they made a mistake. Because of that, site reliability engineering (SRE) teams establishing their own incident management process often invest in the wrong initial metrics. Many teams are overly concerned with reducing MTTR: mean time to resolution. Like the British government, those teams rely too heavily on their metrics and don't consider the larger context. Incidents are almost always going to be underreported initially: people don’t want to admit things are going wrong.
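
Part of MTTR's seductiveness is how trivially it computes. A minimal sketch, assuming incident records with `opened`/`resolved` timestamps (the field names are hypothetical):

```python
# Computing mean time to resolution (MTTR) from incident records.
from datetime import datetime

def mttr_hours(incidents):
    """Mean time from open to resolution, in hours."""
    durations = [
        (i["resolved"] - i["opened"]).total_seconds() / 3600
        for i in incidents
    ]
    return sum(durations) / len(durations)

incidents = [
    {"opened": datetime(2021, 9, 1, 10, 0), "resolved": datetime(2021, 9, 1, 12, 0)},
    {"opened": datetime(2021, 9, 2, 9, 0),  "resolved": datetime(2021, 9, 2, 13, 0)},
]
print(mttr_hours(incidents))  # 3.0
```

The number says nothing about the incidents that were never reported, which is the article's point: the metric is only meaningful in its larger context.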

Three Skills You’ll Need as a Senior Data Scientist

In the context of data science, I would say critical thinking is answering the “why”s in your data science project. Before elaborating on what I mean, the most important prerequisite is knowing the general flow of a data science project. The diagram below shows that. This is a slightly different view from the cyclic series of steps you might see elsewhere, and I think a more realistic one than seeing it as a cycle. Now, to elaborate. In a data science project, there are countless decisions you have to make: supervised vs. unsupervised learning, selecting raw fields of data, feature engineering techniques, selecting the model, evaluation metrics, etc. Some of these decisions are obvious; if you have a set of features and a label associated with them, you’d go with supervised learning instead of unsupervised learning. Others are not, and a seemingly tiny checkpoint you overlooked might be enough to sink the project. It can cost the company money and put your reputation on the line. When you answer not just “what you’re doing” but also “why you’re doing it”, you close down most of the cracks where problems like these can seep in.
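
One concrete "why" that is easy to overlook is the choice of evaluation metric. The toy below (plain Python, invented labels) shows accuracy looking excellent on imbalanced data even though the model catches nothing:

```python
# Why "why this metric?" matters: on imbalanced labels, a model that always
# predicts the majority class scores high accuracy while learning nothing.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [0] * 95 + [1] * 5          # 5% positive class (e.g. fraud cases)
y_pred = [0] * 100                   # "model" that never flags anything

print(accuracy(y_true, y_pred))      # 0.95: looks great, catches zero frauds
```

A team that asked "why accuracy?" up front would have reached for recall, precision, or a cost-weighted metric instead.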

The Benefits and Challenges of Passwordless Authentication

Passwordless authentication is a process that verifies a user's identity with something other than a password. It strengthens security by eliminating password management practices and the risk of password-based attack vectors. It is an emerging subfield of identity and access management and will revolutionize the way employees work. ... Passwordless authentication uses modern authentication methods that reduce the risk of being targeted via phishing attacks. With this approach, employees who receive a phishing email have no sensitive information to hand over that would give threat actors access to their accounts or other confidential data. ... Passwordless authentication appears to be a secure and easy-to-use approach, but there are challenges in its deployment. The most significant issue is the budget and migration complexity. While setting up a budget for passwordless authentication, enterprises should include costs for buying hardware and its setup and configuration. Another challenge is dealing with old-school mentalities. Most IT leaders and employees are reluctant to move away from traditional security methods and try new ones.
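
The phishing-resistance argument can be made concrete with a toy challenge-response flow. This is a hedged sketch only: real passwordless deployments such as FIDO2/WebAuthn use asymmetric key pairs, while this self-contained version substitutes an HMAC over a shared device key so it runs on its own:

```python
# Minimal challenge-response sketch of the passwordless idea: the server never
# stores or receives a password, only verifies proof of key possession.
import hashlib
import hmac
import secrets

device_key = secrets.token_bytes(32)        # provisioned once, at enrollment

def server_issue_challenge():
    return secrets.token_bytes(16)          # fresh nonce; nothing phishable

def device_sign(challenge):
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def server_verify(challenge, response):
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = server_issue_challenge()
assert server_verify(challenge, device_sign(challenge))      # legitimate device
# A captured response is useless against the next, different challenge:
assert not server_verify(server_issue_challenge(), device_sign(challenge))
```

There is no reusable secret a user could type into a phishing page; each login proves possession of the key against a one-time challenge.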

Using CodeQL to detect client-side vulnerabilities in web applications

The idea of CodeQL is to treat source code as a database which can be queried using SQL-like statements. Many languages are supported, among them JavaScript. For JavaScript, both server-side and client-side flavours are supported. JS CodeQL understands modern editions such as ES6 as well as frameworks like React (with JSX) and Angular. CodeQL is not just grep: it supports taint tracking, which allows you to test whether a given user input (a source) can reach a vulnerable function (a sink). This is especially useful when dealing with DOM-based Cross-Site Scripting vulnerabilities. By tainting a user-supplied DOM property such as location.hash, one can test whether this value actually reaches one of the XSS sinks, e.g. element.innerHTML or document.write(). The common use-case for CodeQL is to run a query suite against open-source code repositories. To do so you may install CodeQL locally or use https://lgtm.com/. For the latter case you should specify a GitHub repository URL and add it as your project.

Moving beyond agile to become a software innovator

Experience design is a specific capability focused on understanding user preferences and usage patterns and creating experiences that delight them. The value of experience design is well established, with organizations that have invested in design exceeding industry peers by as much as 5 percent per year in growth of shareholder return. What differentiates best-in-class organizations is that they embed design in every aspect of the product or service development. As a core part of the agile team, experience designers participate in development processes by, for example, driving dedicated design sprints and ensuring that core product artifacts, such as personas and customer journeys, are created and used throughout product development. This commitment leads to greater adoption of the products or services created, simpler applications and experiences, and a substantial reduction of low-value features. ... Rather than approaching it as a technical issue, the team focused on addressing the full onboarding journey, including workflow, connectivity, and user communications. The results were impressive. The team created a market-leading experience that enabled their first multimillion-dollar sale only four months after it was launched and continued to accelerate sales and increase customer satisfaction.

The relationship between data SLAs & data products

The data-as-a-product model aims to close the gap that the data lake left open. In this philosophy, company data is viewed as a product that will be consumed by internal and external stakeholders. The data team’s role is to provide that data to the company in ways that promote efficiency, good user experience, and good decision making. As such, the data providers and data consumers need to work together to answer the questions put forward above. Coming to an agreement on those terms and spelling them out produces a data SLA, or service-level agreement. An SLA is a contract between two parties that defines and measures the level of service a given vendor or product will deliver, as well as remedies if they fail to deliver. SLAs are an attempt to define expectations of the level of service and quality between providers and consumers. They’re very common when an organization is offering a product or service to an external customer or stakeholder, but they can also be used between internal teams within an organization.
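
A single clause of such an agreement might look like the following sketch, where the freshness threshold, timestamps, and field names are invented for illustration:

```python
# One data-SLA clause as code: "the table is never more than 2 hours stale."
from datetime import datetime, timedelta

def check_freshness_sla(last_loaded_at, now, max_staleness=timedelta(hours=2)):
    """Returns (met, staleness) so a breach can trigger the agreed remedy."""
    staleness = now - last_loaded_at
    return staleness <= max_staleness, staleness

now = datetime(2021, 9, 26, 12, 0)
met, staleness = check_freshness_sla(datetime(2021, 9, 26, 9, 0), now)
print(met, staleness)   # False 3:00:00 -> SLA breached, remedy clause applies
```

The value of writing the clause down this way is that "level of service" stops being a vibe and becomes a measurable, alert-able check both producer and consumer agreed to.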

Quote for the day:

"If you can't handle others' disapproval, then leadership isn't for you." -- Miles Anthony Smith

Daily Tech Digest - September 25, 2021

Top 5 Objections to Scrum (and Why Those Objections are Wrong)

Many software development teams are under pressure to deliver work quickly because other teams have deadlines they need to meet. A common objection to Agile is that teams feel that when they have a schedule to meet, a traditional waterfall method is the only way to go. Nothing could be further from the truth. Not only can Scrum work in these situations, but in my experience, it increases the probability of meeting challenging deadlines. Scrum works well with deadlines because it’s based on empiricism, lean thinking, and an iterative approach to product delivery. In a nutshell, empiricism is making decisions based on what is known. In practice, this means that rather than making all of the critical decisions about an initiative upfront, when the least is known, Agile initiatives practice just-in-time decision-making by planning smaller batches of work more often. Lean thinking means eliminating waste to focus only on the essentials, and iterative delivery involves delivering a usable product frequently.

The Future Is Data Center as a Service

The fact is that whether we realize it or not, we’ve gotten used to thinking of the data center as a fluid thing, particularly if we use cluster paradigms such as Kubernetes. We think of pods like tiny individual computers running individual applications, and we start them up and tear them down at will. We create applications using multicloud and hybrid cloud architectures to take advantage of the best situation for each workload. Edge computing has pushed this analogy even further, as we literally spin up additional nodes on demand, with the network adjusting to the new topology. Rightfully so; with the speed of innovation, we need to be able to tear down a data center that is compromised or bring up a new one to replace it, or to enhance it, at a moment’s notice. In a way, that’s what we’ve been doing with public cloud providers: instantiating “hardware” when we need it and tearing it down when we don’t. We’ve been doing this on the cloud providers’ terms, with each public cloud racing to lock in as many companies and workloads as possible with a race to the bottom on cost so they can control the conversation.

DevSecOps: 5 ways to learn more

There’s a clear connection between DevSecOps culture and practices and the open source community, a relationship that Anchore technical marketing manager Will Kelly recently explored in an opensource.com article, “DevSecOps: An open source story.” As you build your knowledge, getting involved in a DevSecOps-relevant project is another opportunity to expand and extend your experience. That could range from something as simple as joining a project’s community group or Slack to ask questions about a particular tool, to taking on a larger role as a contributor at some point. The threat modeling tool OWASP Threat Dragon, for example, welcomes new contributors via its GitHub and website, including testers and coders. ... The value of various technical certifications is a subject of ongoing – or at least on-again, off-again – debate in the InfoSec community. But IT certifications, in general, remain a solid complementary career development component. Considering a DevSecOps-focused certification track is in itself a learning opportunity since any credential worth more than a passing glance should require some homework to attain.

How Medical Companies are Innovating Through Agile Practices

Within regulatory constraints, there is plenty of room for successful use of Agile and Lean principles, despite the lingering doubts of some in quality assurance or regulatory affairs. Agile teams in other industries have demonstrated that they can develop without any compromise to quality. Additional documentation is necessary in regulated work, but most of it can be automated and generated incrementally, which is a well-established Agile practice. Medical product companies are choosing multiple practices, from both Agile and Lean. Change leaders within the companies are combining those ideas with their own deep knowledge of their organization’s patterns and people. They’re finding creative ways to achieve business goals previously out of reach with traditional “big design up front” practices. ... Our goal here is to show how the same core principles in Agile and Lean played out in very different day-to-day actions at the companies we profiled, and how they drove significant business goals for each company.

The Importance of Developer Velocity and Engineering Processes

At its core, an organization is nothing more than a collection of moving parts: a combination of people and resources moving towards a common goal. Delivering on your objectives requires alignment at the highest levels - something that becomes increasingly difficult as companies scale. Growth increases team sizes, creating more dependencies and communication channels within an organization. Collaboration and productivity issues can quickly arise in a fast-scaling environment. It has been observed that adding members to a team drives inefficiency with negligible benefits to team efficacy. This may sound counterintuitive but is a result of the creation of additional communication lines, which increases the chance of organizational misalignment. The addition of communication lines brought on by organization growth also increases the risk of issues related to transparency, as teams can be unintentionally left “in the dark.” This effect is compounded if decision making is done on the fly, especially if multiple people are making decisions independent of each other.
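
The communication-line growth described above is quadratic: n people have n*(n-1)/2 pairwise channels, which a few lines make concrete:

```python
# Pairwise communication channels in a team of n people: n * (n - 1) / 2.

def channels(n):
    return n * (n - 1) // 2

for n in (5, 10, 20):
    print(n, channels(n))
# 5 -> 10, 10 -> 45, 20 -> 190
```

Quadrupling a five-person team multiplies the channels by nineteen, which is why adding members yields such negligible gains in efficacy.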

Tired of AI? Let’s talk about CI.

Architectures become increasingly complex with each neuron. I suggest looking into how many parameters GPT-3 has ;). Now, you can imagine how many different architectures you can have with the infinite number of configurations. Of course, hardware limits our architecture size, but NVIDIA (and others) are scaling the hardware at an impressive pace. So far, we’ve only examined the computations that occur inside the network with established weights. Finding suitable weights is a difficult task, but luckily math tricks exist to optimize them. If you’re interested in the details, I encourage you to look up backpropagation. Backpropagation exploits the chain rule (from calculus) to optimize the weights. For the sake of this post, it’s not essential to understand how the learning of the weights works, but it’s necessary to know that backpropagation does it very well. But it’s not without its caveats. As NNs learn, they optimize all of the weights relative to the data. However, the weights must first be defined — they must have some value. This begs the question: where do we start?
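
The chain-rule trick, and the "where do we start?" question about initial weights, can both be seen on a single neuron. This is a toy sketch, not how production frameworks implement backpropagation:

```python
# Gradient descent on one weight, loss L = (w*x - y)^2.
# By the chain rule: dL/dw = dL/dpred * dpred/dw = 2*(w*x - y) * x.

def train(w, x, y, lr=0.1, steps=50):
    for _ in range(steps):
        pred = w * x
        grad = 2 * (pred - y) * x      # chain rule in one line
        w -= lr * grad                 # gradient descent update
    return w

w = 0.0                                # one naive answer to "where do we start?"
w = train(w, x=2.0, y=6.0)
print(round(w, 4))                     # converges to 3.0, since 3 * 2 == 6
```

With a single weight, starting at zero is harmless; in deep networks the initial values matter a great deal, which is exactly the caveat the excerpt gestures at.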

How do databases support AI algorithms?

Oracle has integrated AI routines into their databases in a number of ways, and the company offers a broad set of options in almost every corner of its stack. At the lowest levels, some developers, for instance, are running machine learning algorithms in the Python interpreter that’s built into Oracle’s database. There are also more integrated options like Oracle’s Machine Learning for R, a version that uses R to analyze data stored in Oracle’s databases. Many of the services are incorporated at higher levels — for example, as features for analysis in the data science tools or analytics. IBM also has a number of AI tools that are integrated with their various databases, and the company sometimes calls Db2 “the AI database.” At the lowest level, the database includes functions in its version of SQL to tackle common parts of building AI models, like linear regression. These can be threaded together into customized stored procedures for training. Many IBM AI tools, such as Watson Studio, are designed to connect directly to the database to speed model construction.

A Comprehensive Guide to Maximum Likelihood Estimation and Bayesian Estimation

An estimation function is a function that helps in estimating the parameters of a statistical model from data with random values. Estimation is the process of extracting parameters from randomly distributed observations. In this article, we are going to have an overview of two estimation functions – Maximum Likelihood Estimation and Bayesian Estimation. Before looking at these two, we will try to understand the probability distributions on which both of these estimation functions depend. The major points to be discussed in this article are listed below. ... As the name suggests, in statistics it is a method for estimating the parameters of an assumed probability distribution, where the likelihood function measures the goodness of fit of a statistical model on data for given values of the parameters. The parameters are estimated by maximizing the likelihood function, so that under the assumed model the observed data is as probable as possible.
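
For a concrete case where the maximisation has a closed form, consider fitting a normal distribution: the log-likelihood is maximised by the sample mean and the (biased) sample variance. The data values below are invented for illustration:

```python
# Maximum likelihood estimation for a normal model: the closed-form MLEs are
# the sample mean and the biased sample variance.
import math

data = [2.1, 1.9, 2.4, 2.0, 1.6]

def log_likelihood(mu, sigma2, xs):
    n = len(xs)
    return (-n / 2 * math.log(2 * math.pi * sigma2)
            - sum((x - mu) ** 2 for x in xs) / (2 * sigma2))

mu_hat = sum(data) / len(data)                               # MLE of the mean
var_hat = sum((x - mu_hat) ** 2 for x in data) / len(data)   # MLE of variance

# Any other parameter choice scores a lower likelihood on this data:
assert log_likelihood(mu_hat, var_hat, data) >= log_likelihood(2.5, var_hat, data)
print(mu_hat, var_hat)   # 2.0 and 0.068
```

Bayesian estimation would instead place a prior over (mu, sigma2) and report a posterior distribution rather than this single maximising point.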

DORA explorers see pandemic boost in numbers of 'elite' DevOps performers

DORA has now added a fifth metric, reliability, defined as the degree to which one "can keep promises and assertions about the software they operate." This is harder to measure, but nevertheless the research on which the report is based asked tech workers to self-assess their reliability. There was a correlation between reliability and the other performance metrics. According to the report, 26 per cent of those polled put themselves into the elite category, compared to 20 per cent in 2019, and seven per cent in 2018. Are higher performing techies more likely to respond to the survey? That seems likely, and self-assessment is also a flawed approach; but nevertheless it is an encouraging trend, presuming agreement that these metrics and survey methodology are reasonable. Much of the report reiterates conventional DevOps wisdom. NIST's characteristics of cloud computing [PDF] are found to be important. "What really matters is how teams implement their cloud services, not just that they are using cloud technologies," the researchers said, including things like on-demand self service for cloud resources.

Why Our Agile Journey Led Us to Ditch the Relational Database

Despite our developers having zero experience with MongoDB prior to our first release, they were still able to ship to production in eight weeks while eliminating more than 600 lines of code, coming in under time and budget. Pretty good, right? Additionally, the feedback was that the document data model helped eliminate the tedious data mapping and modeling work they were used to from a relational database. This amounted to more time that our developers could allocate to high-priority projects. When we first began using MongoDB in summer 2017, we had two collections in production. A year later, that had grown to 120 collections deployed in production, writing 10 million documents daily. Now, each team was able to own its own dependency and have its own dedicated microservice and database, leading to a single pipeline for application and database changes. These changes, along with the hours saved not spent refactoring our data model, allowed us to cut our deployment time to minutes, down from hours or even days.
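
To illustrate the data-mapping point (with an invented schema, not the company's actual one): data that a relational design would spread across orders, order_items, and customers tables can live in one self-describing document. With a real MongoDB deployment the same dict could be passed to `collection.insert_one()`; it stays a plain dict here so the sketch runs on its own:

```python
# A document-model example: nested structure replaces joins and an ORM
# mapping layer.

order = {
    "order_id": "A-1001",
    "customer": {"name": "Dana", "city": "Austin"},   # no join to a customers table
    "items": [                                        # no join to an order_items table
        {"sku": "X1", "qty": 2, "price": 9.99},
        {"sku": "Y7", "qty": 1, "price": 24.50},
    ],
}

# The application reads the aggregate directly, no reassembly from rows:
total = sum(i["qty"] * i["price"] for i in order["items"])
print(round(total, 2))   # 44.48
```

The lines of code that disappear are the ones that would have mapped this one aggregate back and forth across three tables.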

Quote for the day:

"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik

Daily Tech Digest - September 24, 2021

Chef Shifts to Policy as Code, Debuts SaaS Offering

As for ease of use, Chef Enterprise Automation Stack (EAS) will also be available in both AWS and Azure marketplaces. The company has begun a Chef Managed Services program, and Chef EAS is also now available in a beta SaaS offering. All of these together, said Nanjundappa, will make Chef EAS “easy to access and adopt, which will help reduce overall time to value.” Looking forward, Nanjundappa said that the focus will include features like cloud security posture management (CSPM) and Kubernetes security. “We are seeing more and more compute workloads being migrated towards containers and Kubernetes. We currently offer Chef Inspec + content for CIS profiles for K8s and Docker that help secure Containers and Kubernetes,” wrote Nanjundappa. “But we will be adding additional abilities to maintain security posture in containers and Kubernetes platforms in the coming years.” More specifically, upcoming Kubernetes features will offer visibility into containers and the Kubernetes environment, scanning for common misconfigurations, vulnerability management, and runtime security.

Private vs. Public Blockchains For Enterprise Business Solutions

Not all blockchains are created equal. Businesses have always required a reasonable degree of privacy as well as control over their networks. Since the popularisation of the internet, and the advance of eCommerce, it’s been essential that companies protect their systems from outside attackers, both to preserve their workflow but also any sensitive information they might be storing. Hence, as blockchain technology becomes integrated into the modern digital workplace, it is only logical that private networks are often seen as preferable for many organizations. This is no big surprise — especially given that some of the main selling points of blockchain include a completely transparent ledger containing all data as well as the ability to move value around. And it’s clear why a business wouldn’t want just anyone to be able to access their internal network. This way, the company gets many of the benefits of the novel tech but can remain opaque to most of the world. It’s also quite valid that private blockchains are typically much more efficient than public ones. 

10 top API security testing tools

Many organizations likely don’t know how many APIs they are using, what tasks they are performing, or how high a permission level they hold. Then there is the question of whether those APIs contain any vulnerabilities. Industry and private groups have come up with API testing tools and platforms to help answer those questions. Some testing tools are designed to perform a single function, like mapping why specific Docker APIs are improperly configured. Others take a more holistic approach to an entire network, searching for APIs and then providing information about what they do and why they might be vulnerable or over-permissioned. Several well-known commercial API testing platforms are available as well as a large pool of free or low-cost open-source tools. The commercial tools generally have more support options and may be able to be deployed remotely through the cloud or even as a service. Some open-source tools may be just as good and have the backing of the community of users who created them. Which one you select depends on your needs, the security expertise of your IT teams, and budget.

Implementing risk quantification into an existing GRC program

How do risk professionals quantify risk? Using dollars and cents. Taking the information gathered in the Open FAIR model simulations, risk quantification further breaks down primary and secondary losses into six different types for each loss, allowing the organization to determine how best to categorize them. CISOs and other risk professionals can consider data points from the market, their data and additional available information. They can classify each type of data they’re inputting as high or low confidence. Primary loss equals anything that’s a direct loss to the company due to a specific event. Secondary loss includes something which may or may not occur, like reputational damage or potential lost revenue. Risk quantification also enables risk professionals to communicate risk to leaders and other stakeholders in a shared language everyone understands: dollars and cents. Quantifying risk in financial terms enables organizations to assess where their biggest loss exposures may be, conduct cost-benefit analyses for those initiatives designed to improve risk activities, and prioritize those risk mitigation activities based on their impact to the business.
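
A minimal Monte Carlo sketch of the idea follows, with invented distributions and parameters rather than calibrated Open FAIR inputs:

```python
# FAIR-style risk quantification sketch: simulate event frequency and loss
# magnitude, then express annualised loss exposure in dollars.
import random

random.seed(42)

def simulate_annual_loss(trials=10_000):
    losses = []
    for _ in range(trials):
        events = random.randint(0, 4)    # how often the loss event occurs in a year
        primary = sum(random.uniform(10_000, 50_000) for _ in range(events))
        # Secondary loss (e.g. reputational damage) may or may not materialise:
        secondary = random.uniform(50_000, 200_000) if random.random() < 0.2 else 0
        losses.append(primary + (secondary if events else 0))
    return sum(losses) / len(losses)

print(f"Expected annual loss exposure: ${simulate_annual_loss():,.0f}")
```

Even this toy output is a dollar figure, which is the communication win: it can be weighed directly against the cost of a mitigation in a cost-benefit analysis.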

The Architecture of a Web 3.0 application

Unlike Web 2.0 applications like Medium, Web 3.0 eliminates the middle man. There’s no centralized database that stores the application state, and there’s no centralized web server where the backend logic resides. Instead, you can leverage blockchain to build apps on a decentralized state machine that’s maintained by anonymous nodes on the internet. By “state machine,” I mean a machine that maintains some given program state and the future states allowed on that machine. Blockchains are state machines that are instantiated with some genesis state and have very strict rules (i.e., consensus) that define how that state can transition. Better yet, no single entity controls this decentralized state machine — it is collectively maintained by everyone in the network. And what about a backend server? Unlike Medium’s centrally controlled backend, in Web 3.0 you can write smart contracts that define the logic of your applications and deploy them onto the decentralized state machine. This means that every person who wants to build a blockchain application deploys their code on this shared state machine.
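
The state-machine framing can be sketched without any blockchain machinery at all: a genesis state plus a strict transition rule that every node applies to the same ordered transactions. Everything here (the ledger shape, the rule) is a toy illustration:

```python
# Toy state machine in the spirit of the excerpt: any node that applies the
# same ordered transactions to the genesis state reaches the same state.

GENESIS = {"alice": 100, "bob": 0}

def apply_tx(state, tx):
    """Transition rule: reject transfers that would overdraw an account."""
    sender, receiver, amount = tx
    if state.get(sender, 0) < amount or amount <= 0:
        return state                     # invalid transition; state unchanged
    new_state = dict(state)
    new_state[sender] -= amount
    new_state[receiver] = new_state.get(receiver, 0) + amount
    return new_state

state = GENESIS
for tx in [("alice", "bob", 30), ("bob", "alice", 500)]:   # second tx is invalid
    state = apply_tx(state, tx)

print(state)   # {'alice': 70, 'bob': 30}
```

What consensus adds on top of this sketch is agreement on the *order* of transactions across anonymous nodes; given the same order, the deterministic transition rule does the rest.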

A Major Advance in Computing Solves a Complex Math Problem 1 Million Times Faster

That's an exciting development when it comes to tackling the most complex computational challenges, from predicting the way the weather is going to turn, to modeling the flow of fluids through a particular space. Such problems are what this type of resource-intensive computing was developed to take on; now, the latest innovations are going to make it even more useful. The team behind this new study is calling it the next generation of reservoir computing. "We can perform very complex information processing tasks in a fraction of the time using much less computer resources compared to what reservoir computing can currently do," says physicist Daniel Gauthier, from The Ohio State University. "And reservoir computing was already a significant improvement on what was previously possible." Reservoir computing builds on the idea of neural networks – machine learning systems based on the way living brains function – that are trained to spot patterns in a vast amount of data.

Enterprise data management: the rise of AI-powered machine vision

The process of training machine learning algorithms is dramatically hindered for firms acquiring and centralising petabytes of unstructured data – whether video, picture, or sensor data. The AI development pipeline and production model tweaking are both delayed as a result of this centralised data processing method. In an industrial setting, this could result in product faults being overlooked, causing considerable financial loss or even putting lives in peril. Recently, distributed, decentralised architectures have become the preferred choice among businesses, resulting in most data being kept and processed at the edge to overcome the delay and latency challenges and address issues associated with data processing speeds. Deployment of edge analytics and federated machine learning technologies is bringing notable benefits while tackling the inherent security and privacy deficiencies of centralised systems. Take, for example, a large-scale surveillance network that continuously records video. Effectively training an ML model to differentiate between certain items requires the model to assess footage in which something new is observed, rather than hours of film of an empty building or street.
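
Federated learning, mentioned above, can be sketched in a few lines: each edge node takes a training step on its local data and ships back only model weights, which the server averages. The one-parameter model and data here are invented for illustration:

```python
# Federated averaging sketch: raw data never leaves the edge nodes; only
# model weights are shared and averaged.

def local_update(weights, local_data, lr=0.1):
    """One gradient step of a 1-parameter model y = w*x on local data."""
    w = weights
    for x, y in local_data:
        w -= lr * 2 * (w * x - y) * x
    return w

def federated_average(client_weights):
    return sum(client_weights) / len(client_weights)

global_w = 0.0
clients = [[(1.0, 2.0)], [(2.0, 4.0)]]           # both consistent with w = 2
for _ in range(30):                              # communication rounds
    updates = [local_update(global_w, data) for data in clients]
    global_w = federated_average(updates)        # only weights leave the edge

print(round(global_w, 3))                        # approaches 2.0
```

The privacy property is structural: the server sees numbers like `global_w`, never the surveillance footage or sensor readings that produced them.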

The evolution of DRaaS

In the days in which DRaaS was born, it was not unusual for companies to maintain duplicate sets of hardware in an off-site location. Yes, they could replicate the data from their production site to the off-site location, but the expense of procuring and maintaining the secondary site was prohibitive. This led many to use the secondary location for old and retired hardware or even to use less powerful computer systems and less efficient storage to save money. DRaaS is essentially DR delivered as a service. Expert third-party providers deliver tools or services, or both, to enable organizations to replicate their workloads to data centers managed by those providers. This cloud-based model allowed for greater agility than previous iterations of DR could easily provide, empowering businesses to run in a geographically different location as close to normal as possible while the original site was made ready for operations again. And technology improvements over the course of the 2010s only made the failover and failback process more seamless and granular.

JLL CIO: Hybrid Work, AI, and a Data and Tech Revolution

Offices typically offer multiple services, Wagoner explains. For instance, someone puts the paper in the printers. Someone helps employees with laptop problems. Someone runs the on-site cafeteria. Someone maintains the temperature and air quality of the office. As an employee, if there’s an issue, you need to go to a different group for each one of these different issues. However, JLL’s vision is to remove that friction and collect all those services into a single interface experience app for employees. “With the experience app, we eliminate you having to know that you need to go to office services for one thing and then remember the URL for the IT help desk for another thing,” Wagoner says. “We don’t even necessarily replace any of the existing technology. We just give the end user a much better, easier experience to get to what they need.” This experience app is called “Jet,” and it also can inform workers of rules for particular buildings during the pandemic. For instance, if you book a desk in a building or as you approach a building it might tell you if that building has a vaccine requirement or a masking requirement.

Intel: Under attack, fighting back on many fronts

Each processor architecture has strengths and weaknesses, and each is best suited to specific use cases. Intel’s XPU project, announced last year, seeks to offer a unified programming model for all types of processor architectures and match every application to its optimal architecture. XPU means you can have x86, FPGA, AI and machine-learning processors, and GPUs all mixed into your network, and the app is compiled to the processor best suited for the job. That is done through the oneAPI project, which goes hand-in-hand with XPU. XPU is the silicon part, while oneAPI is the software that ties it all together. oneAPI is a heterogeneous programming model with code written in common languages such as C, C++, Fortran, and Python, and standards such as MPI and OpenMP. The oneAPI Base Toolkit includes compilers, performance libraries, and analysis and debug tools for general-purpose computing, HPC, and AI. It also provides a compatibility tool that aids in migrating code written in Nvidia’s CUDA to Data Parallel C++ (DPC++), the language of Intel’s GPU.

Quote for the day:

"Don't measure yourself by what you have accomplished. But by what you should have accomplished with your ability." -- John Wooden

Daily Tech Digest - September 23, 2021

The ‘Great Resignation’ is coming for software development

Companies of all sizes should be strategic about the use of developer time. Why waste human resources and attention on tasks that can be done more quickly and less expensively through automation? The cost of a developer minute is roughly $1.65, while the cost of a compute minute for automating a formerly manual process is approximately $0.006. Bear in mind also the human cost: routine, low-impact, uninteresting activities are a poor use of engineering skills, time, and attention, and make it hard for someone highly trained to stay motivated. Instead, automate core building blocks as much as possible. Implement solutions that integrate easily with other tooling or processes. Removing friction from onboarding new developers allows for a simple life. A simple life means developers are innovating, not toiling. A good place to start if you haven’t already is with CI/CD. A reliable build tool allows teams to automate their processes and practice good hygiene. That way, when systems become more complex, your business will have a foundation in place to handle them (you can thank me later).
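The quoted per-minute figures make for a quick back-of-the-envelope comparison. The 15-minute task run 100 times a month is a hypothetical example, not from the article:

```python
# Back-of-the-envelope check of the quoted figures: a developer minute
# vs. a compute minute for an automated task.
DEV_MINUTE = 1.65       # dollars, as quoted above
COMPUTE_MINUTE = 0.006  # dollars, as quoted above

def manual_cost(minutes_per_run, runs):
    return DEV_MINUTE * minutes_per_run * runs

def automated_cost(minutes_per_run, runs):
    return COMPUTE_MINUTE * minutes_per_run * runs

# Hypothetical: a 15-minute manual task performed 100 times a month.
manual = manual_cost(15, 100)        # about $2,475/month of developer time
auto = automated_cost(15, 100)       # about $9/month of compute
ratio = DEV_MINUTE / COMPUTE_MINUTE  # a developer minute costs ~275x a compute minute
```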

The Value Creation System

The Value Equation provides the foundational point of reference for an enterprise, both as a driver and as a constraint for its modus operandi. Bound within the confines of the Value Equation, the enterprise emerges as a conduit for value creation – essentially, as a Value Creation System made up of myriad fixed and moving parts which collude and collide to generate the products or services offered to the market. In fact, the enterprise closely resembles a living, breathing organism, in that it can self-organize, learn, adapt, diversify, specialize, and evolve “emergent properties” such as innovative thinking and conscious risk-taking behaviors. As a result, an enterprise is considered to be a complex adaptive system. What distinguishes an enterprise from other complex adaptive systems such as the stock market or the cells in an organism is the fact that it is deliberately organized around the creation of value. The enterprise is essentially a Value Creation System designed to ingest ‘raw resources’ such as data, materials, capital and labor power, and produce outputs – services, products, information – useful to and desired by their customers.

14 things you need to know about data storage management

“Setting the right data retention policies is a necessity for both internal data governance and legal compliance,” says Chris Grossman, senior vice president, Enterprise Applications, Rand Worldwide and Rand Secure Archive, a data archiving and management solution provider. “Some of your data must be retained for many years, while other data may only be needed for days.” “When setting up processes, identify the organization’s most important data and prioritize storage management resources appropriately,” says Scott-Cowley. “For example, email may be a company’s top priority, but storing and archiving email data for one particular group, say the executives, may be more critical than other groups,” he says. “Make sure these priorities are set so data management resources can be focused on the most important tasks.” ... Similarly, “look for a solution that provides the flexibility to choose where data is stored: on premise and/or in the cloud,” says Jesse Lipson, founder of ShareFile and VP & GM of Data Sharing at Citrix. “The solution should allow you to leverage existing investments in data platforms such as network shares and SharePoint.”

Big Tech & Their Favourite Deep Learning Techniques

A subsidiary of Alphabet, DeepMind remains synonymous with reinforcement learning. From AlphaGo to MuZero and the recent AlphaFold, the company has been championing breakthroughs in reinforcement learning. AlphaGo was the first computer program to defeat a professional human Go player. It combines an advanced search tree with deep neural networks. These neural networks take a description of the Go board as input and process it through a number of different network layers containing millions of neuron-like connections. One neural network, the ‘policy network,’ selects the next move to play, while the other, called the ‘value network,’ predicts the winner of the game. ... Facebook is synonymous with self-supervised learning techniques across domains via fundamental, open scientific research. It looks to improve image, text, audio and video understanding systems in its products. As with its pretrained language model XLM, self-supervised learning is accelerating important applications at Facebook today, such as proactive detection of hate speech.
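The division of labor between the two networks can be illustrated with a toy sketch. This is not AlphaGo itself (which uses deep networks and Monte Carlo tree search over real board states); the numeric "state", the scoring functions, and the candidate moves below are all stand-ins.

```python
import math

# Toy illustration of the policy/value split described above: a "policy"
# assigns prior probabilities to candidate moves, a "value" estimates the
# chance of winning from a position, and move selection combines both.

def policy(state, moves):
    """Return a prior probability for each candidate move (toy: prefer larger moves;
    the state is ignored in this sketch)."""
    scores = {m: max(m, 0.01) for m in moves}
    total = sum(scores.values())
    return {m: s / total for m, s in scores.items()}

def value(state):
    """Estimate the probability of winning from a state (toy: a squashed score)."""
    return 1 / (1 + math.exp(-state))

def choose_move(state, moves):
    """Weight each move's prior by the value of the position it reaches."""
    priors = policy(state, moves)
    return max(moves, key=lambda m: priors[m] * value(state + m))

best = choose_move(0.0, [1, 2, 3])  # the move leading to the best-weighted position
```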

New Nagios Software Bugs Could Let Hackers Take Over IT Infrastructures

As many as 11 security vulnerabilities have been disclosed in Nagios network management systems, some of which could be chained to achieve pre-authenticated remote code execution with the highest privileges, as well as lead to credential theft and phishing attacks. Industrial cybersecurity firm Claroty, which discovered the flaws, said flaws in tools such as Nagios make them an attractive target owing to their "oversight of core servers, devices, and other critical components in the enterprise network." The issues have since been fixed in updates released in August with Nagios XI 5.8.5 or above, Nagios XI Switch Wizard 2.5.7 or above, Nagios XI Docker Wizard 1.13 or above, and Nagios XI WatchGuard 1.4.8 or above. "SolarWinds and Kaseya were likely targeted not only because of their large and influential customer bases, but also because of their respective technologies' access to enterprise networks, whether it was managing IT, operational technology (OT), or internet of things (IoT) devices," Claroty's Noam Moshe said in a write-up published Tuesday, noting how the intrusions targeting the IT and network management supply chains emerged as a conduit to compromise thousands of downstream victims.

Practical API Design Using gRPC at Netflix

Alex Borysov and Ricky Gardiner, senior software engineers at Netflix, note that API clients often do not use all the fields present in the responses to their requests. This transmission and computation of irrelevant information for one specific request can waste bandwidth and computational resources, increase the error rate, and increase the overall latency. The authors argue that such waste can be avoided when API clients specify which fields are relevant to them with every request. They point out that this feature is present out of the box with API standards such as GraphQL and JSON:API and question whether Netflix's wide usage of gRPC in the backend could benefit from an identical mechanism. They found that a particular message called FieldMask is defined in Protobuf, the underlying message encoding of gRPC. When included in API requests, it allows clients to list which fields are relevant and can be applied to both read and modify operations.
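The mechanism can be illustrated without gRPC machinery. In real services the mask is a `google.protobuf.FieldMask` applied to Protobuf messages; the pure-Python dict version below just shows the idea of pruning a response to the dotted paths a client asked for. The response shape is hypothetical.

```python
# Sketch of the FieldMask idea: the client names the fields it needs as
# dotted paths, and the server prunes the response to exactly those paths.

def apply_field_mask(message: dict, paths: list) -> dict:
    """Keep only the listed paths, e.g. 'title' or 'cast.name'."""
    out = {}
    for path in paths:
        head, _, rest = path.partition(".")
        if head not in message:
            continue  # silently skip paths the message doesn't have
        if rest and isinstance(message[head], dict):
            sub = apply_field_mask(message[head], [rest])
            out.setdefault(head, {}).update(sub)
        else:
            out[head] = message[head]
    return out

show = {"title": "Example Show", "year": 2021, "cast": {"name": "A", "fee": 1}}
slim = apply_field_mask(show, ["title", "cast.name"])
# Only the requested fields survive; 'year' and 'cast.fee' are never sent.
```

The bandwidth saving comes from the server never serializing or computing the pruned fields in the first place.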

Ransomware is Harming Cybersecurity Strategy: What Can Organizations Do?

The answer is to layer up best-in-class protection across endpoints, servers, cloud platforms, web and email gateways, and networks. But the secret sauce in all this must be intelligence. It should help organizations understand where their highest risk vulnerabilities are internally. It can also drive visibility into broader threat activity outside the corporate perimeter—whether it’s chatter on dark web forums or new registrations of phishing sites. With open APIs and automation, organizations can integrate this intelligence seamlessly into their best-of-breed security environment, freeing up analysts to focus on high-value tasks and accelerating detection and response times. For example, a new phishing site IP address could be blocked in minutes before the group behind it has even been able to send your employees scam emails. Likewise, intelligence on new ransomware IOCs could be fed into intrusion prevention tools to enhance resilience before you’re even attacked. The right threat intel can also help red teams probe for weaknesses and proactively build stronger defenses.
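The automation loop described above can be sketched in a few lines: pull fresh IOCs from a feed and translate them into block rules before the attack lands. The feed format and the rule strings below are hypothetical, not any vendor's API.

```python
# Hedged sketch of IOC-driven automation: parse a threat-intel feed and
# emit block rules for a firewall / DNS sinkhole. Formats are illustrative.

def parse_ioc_feed(feed_lines):
    """Each line: '<type>,<value>', e.g. 'ip,203.0.113.7' or 'domain,phish.example'."""
    iocs = []
    for line in feed_lines:
        kind, _, value = line.strip().partition(",")
        if kind and value:
            iocs.append((kind, value))
    return iocs

def to_block_rules(iocs):
    """Translate each IOC into a block rule for the matching control point."""
    rules = []
    for kind, value in iocs:
        if kind == "ip":
            rules.append(f"deny ip {value}")
        elif kind == "domain":
            rules.append(f"sinkhole {value}")
    return rules

feed = ["ip,203.0.113.7", "domain,phish.example"]
rules = to_block_rules(parse_ioc_feed(feed))
```

In practice this push happens through the open APIs mentioned above, so new IOCs become enforcement within minutes rather than after an analyst's triage queue clears.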

To build trust with employees, be consistent

A lot of leaders seem to think they also walk the talk on culture. PwC’s survey shows that 73% of senior management think they do. But only 46% of the rest of the workforce agree. We’ve seen firsthand that this mismatch damages trust. And without trust, it can be difficult to motivate people, bring about change, and encourage the desired behaviors. One of our team members at the Katzenbach Center, a former US soldier, tells a story that accentuates the importance of leadership authenticity. In the armed forces, which rely on the ranks obeying their leaders’ instructions without question, Army leaders routinely make sure they eat only after their troops have been fed, to give a clear signal that the troops’ welfare is their top priority. But on one occasion when our colleague was a first lieutenant in the 25th Infantry Division, his entire unit was locked down because a piece of equipment was missing. “The lockdown went on all day and into the evening, and instead of hot food, we were given MRE [meal ready-to-eat] rations. But then some of the soldiers saw the commander’s wife sneaking him Burger King. After that, he was completely ineffective as a leader because no one in the unit respected him.”

What is a Blockchain and how does it work on Bitcoin?

The origins of Blockchain go back to 1991, when Stuart Haber and W. Scott Stornetta described the first work on a chain of cryptographically secured blocks. In this study, Haber and Stornetta sought to create mechanisms to produce digital seals and order registered files in a unique and secure way. This represented a practical computational solution for the ordering and handling of digital documents so that they could not be modified or manipulated. However, its popularity surged in 2008 with the arrival of the cryptocurrency Bitcoin, and it is already being used for other commercial applications, so much so that annual growth of 51% is estimated for 2022. ... Even with these security locks, it would be possible for someone using a computer able to calculate hundreds of fingerprints per second to modify the fingerprints of the preceding and following blocks. With this possible problem in mind, the Blockchain has a mechanism called 'proof of work', which consists of purposely delaying the process of creating the new block of information; in other words, before creating a new block the system audits the entire chain originally created. ...
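The chaining and proof-of-work ideas above can be shown in a minimal sketch: each block's fingerprint commits to the previous block's fingerprint, and creating a block is deliberately made costly by requiring a hash below a difficulty target. This is a toy, not Bitcoin's actual consensus code.

```python
import hashlib

# Minimal hash-chain + proof-of-work sketch. Each block's hash covers the
# previous hash, so tampering with any block breaks every block after it.

def block_hash(prev_hash, data, nonce):
    payload = f"{prev_hash}:{data}:{nonce}".encode()
    return hashlib.sha256(payload).hexdigest()

def mine(prev_hash, data, difficulty=3):
    """Find a nonce whose hash starts with `difficulty` zeros -- the costly,
    deliberately slow step that proof of work introduces."""
    nonce = 0
    while True:
        h = block_hash(prev_hash, data, nonce)
        if h.startswith("0" * difficulty):
            return nonce, h
        nonce += 1

genesis = "0" * 64
nonce, h = mine(genesis, "first block", difficulty=3)
# Changing the data invalidates the mined hash, so an attacker would have to
# redo the work for this block and every block that follows it.
tampered = block_hash(genesis, "tampered block", nonce)
```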

Russian-Linked Group Using Secondary Backdoor Against Targets

The newly discovered backdoor, which the researchers call "TinyTurla," has been deployed against targets in the U.S. and Germany over the last two years. More recently, however, Turla has used the malware against government organizations and agencies in Afghanistan before the country was overtaken by the Taliban in August, according to the report. "This malware specifically caught our eye when it targeted Afghanistan prior to the Taliban's recent takeover of the government there and the pullout of Western-backed military forces," according to the analysis. "Based on forensic evidence, Cisco Talos assesses with moderate confidence that this was used to target the previous Afghan government." Turla has been active since the mid-1990s and is one of the oldest operating advanced persistent threat groups that have links to Russia's FSB - formerly KGB - according to a study published in February by security researchers at VMware. The group, which typically targets government or military agencies, is also called Belugasturgeon, Ouroboros, Snake, Venomous Bear and Waterbug and is known for constantly changing techniques and methods to avoid detection.

Quote for the day:

"Risks are the seeds from which successes grow." -- Gordon Tredgold

Daily Tech Digest - September 21, 2021

Cybersecurity Priorities in 2021: How Can CISOs Re-Analyze and Shift Focus?

The level of sophistication of attacks has increased manifold in the past couple of years. Attackers are leveraging advanced technology to infiltrate company networks and gain access to mission-critical assets. Given this scenario, organizations too need to leverage futuristic technology such as next-gen WAF, intelligent automation, behavior analytics, deep learning, security analytics, and so on to prevent even the most complex and sophisticated attacks. Automation also enables organizations to gain speed and scalability in the broader IT environment amid ramped-up attack activity. Security solutions like Indusface's AppTrana enable all this and more. ... Remote work is here to stay, and the concept of the network perimeter is blurring. For business continuity, organizations have to enable access to mission-critical assets for employees wherever they are. Employees are probably accessing these resources from personal, shared devices and unsecured networks. CISOs need to think strategically and implement borderless security based on a zero-trust architecture.

Benefits of cloud computing: The pros and cons

Cloud computing management raises many information systems management issues, including ethical (security, availability, confidentiality, and privacy) issues, legal and jurisdictional issues, data lock-in, lack of standardized service level agreements (SLAs), customisation and technological bottlenecks, and others. Sharing a cloud provider has some associated risks. The most common cloud security issues include unauthorized access through improper access controls and the misuse of employee credentials. According to industry surveys, unauthorized access and insecure APIs are tied for the No. 1 spot as the biggest perceived security vulnerability in the cloud. Others include internet protocol vulnerabilities, data recovery vulnerability, metering and billing evasion, vendor security risks, compliance and legal risks, and availability risks. When you store files and data in someone else's server, you're trusting the provider with your crown jewels. Whether in a cloud or on a private server, data loss refers to the unwanted removal of sensitive information, either due to an information system error or theft by cybercriminals.

Progressing from a beginner to intermediate developer

In all your programming, you should aim to have a single source of truth for everything. This is the core idea behind DRY - Don't Repeat Yourself - programming. In order to not repeat yourself, you need to define everything only once. This plays out in different ways depending on the context. In CSS, you want to store all the values that appear time and time again in variables. Colors, fonts, max-widths, and even spacing such as padding or margins are all properties that tend to be consistent across an entire project. You can often define variables for a stylesheet based on the brand guidelines, if you have access. Otherwise it's a good idea to go through the site designs and define your variables before starting. In JavaScript, every function you write should only appear once. If you need to reuse it in a different place, isolate it from the context you're working in by putting it into its own file. You'll often see a util folder in JavaScript file structures - generally this is where you'll find more generic functions used across the app. Variables can also be sources of truth.
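The same single-source-of-truth pattern looks like this in Python terms: shared values live in one "design tokens" module and generic helpers in one util module, so nothing is defined twice. The module layout and names below are illustrative, not from the article.

```python
# Sketch of DRY / single-source-of-truth. In a real project these would be
# two modules (e.g. tokens.py and util.py) imported everywhere they're needed.

# --- tokens: one place for every value the project reuses ---
BRAND_COLOR = "#1a73e8"
MAX_WIDTH = 1200
SPACING = 8

# --- util: a generic helper isolated from any one call site ---
def slugify(text):
    """Turn 'My Page Title' into 'my-page-title' for URLs and element IDs."""
    return "-".join(text.lower().split())

# Callers import these rather than redefining the value or the logic inline:
heading_id = slugify("Progressing From Beginner To Intermediate")
```

When the brand color or the slug rule changes, it changes in exactly one place.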

SRE vs. DevOps: What are the Differences?

Site Reliability Engineering, or SRE, is a strategy that uses principles rooted in software engineering to make systems as reliable as possible. In this respect, SRE, which was made popular by Google starting in the mid-2000s, facilitates a shared mindset and shared tooling between software development and IT operations. Instead of writing software using one set of strategies and tools, then managing it using an entirely different set, SRE helps to integrate each practice together by orienting both around concepts rooted in software engineering. Meanwhile, DevOps is a philosophy that, at its core, encourages developers and IT operations teams to work closely together. The driving idea behind DevOps is that when developers have visibility into the problems IT operations teams experience in production, and IT operations teams have visibility into what developers are building as they push new application releases down the development pipeline, the end result is greater efficiency and fewer problems for everyone.

Distributed transaction patterns for microservices compared

The technical requirements for two-phase commit are that you need a distributed transaction manager such as Narayana and a reliable storage layer for the transaction logs. You also need DTP XA-compatible data sources with associated XA drivers that are capable of participating in distributed transactions, such as RDBMS, message brokers, and caches. If you are lucky enough to have the right data sources but run in a dynamic environment, such as Kubernetes, you also need an operator-like mechanism to ensure there is only a single instance of the distributed transaction manager. The transaction manager must be highly available and must always have access to the transaction log. For implementation, you could explore the Snowdrop Recovery Controller, which uses the Kubernetes StatefulSet pattern for singleton purposes and persistent volumes to store transaction logs. In this category, I also include specifications such as Web Services Atomic Transaction (WS-AtomicTransaction) for SOAP web services.
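The protocol itself is simple to sketch: the coordinator first asks every participant to prepare (phase one), and only if all vote yes does it tell them to commit (phase two); any "no" aborts everyone. Real transaction managers such as Narayana add durable logs and crash recovery, which this toy omits.

```python
# Toy two-phase commit. Participants vote in the prepare phase; the
# coordinator's decision is unanimous-yes-or-abort.

class Participant:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.state = name, can_commit, "idle"

    def prepare(self):
        """Phase 1: do local work and vote on whether commit is possible."""
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def finish(self, commit):
        """Phase 2: apply the coordinator's global decision."""
        self.state = "committed" if commit else "aborted"

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]  # phase 1: collect votes
    decision = all(votes)                        # any 'no' aborts everyone
    for p in participants:                       # phase 2: broadcast decision
        p.finish(decision)
    return decision

ok = two_phase_commit([Participant("db"), Participant("queue")])
bad = two_phase_commit([Participant("db"), Participant("cache", can_commit=False)])
```

The XA requirement in the paragraph above is precisely this: every data source must be able to hold a "prepared" state durably until the coordinator's decision arrives.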

5 observations about XDR

Today’s threat detection solutions use a combination of signatures, heuristics, and machine learning for anomaly detection. The problem is that they do this on a tactical basis by focusing on endpoints, networks, or cloud workloads alone. XDR solutions will include these tried-and-true detection methods, only in a more correlated way on layers of control points across hybrid IT. XDR will go further than existing solutions with new uses of artificial intelligence and machine learning (AI/ML). Think “nested algorithms” a la Russian dolls where there are layered algorithms to analyze aberrant behavior across endpoints, networks, clouds, and threat intelligence. Oh, and it kind of doesn’t matter which security telemetry sources XDR vendors use to build these nested algorithms, as long as they produce accurate high-fidelity alerts. This means that some vendors will anchor XDR to endpoint data, some to network data, some to logs, and so on. To be clear, this won’t be easy: Many vendors won’t have the engineering chops to pull this off, leading to some XDR solutions that produce a cacophony of false positive alerts.

Why quantum computing is a security threat and how to defend against it

First, public key cryptography was not designed for a hyper-connected world; it wasn't designed for an Internet of Things, and it's unsuitable for the nature of the world that we're building. The need to constantly refer to certification providers for authentication or verification is fundamentally unsuitable. And of course the mathematical primitives at the heart of it are definitely compromised by quantum attacks, so you have a system which is crumbling and is certainly dead in a few years' time. A lot of the attacks we've seen result from certifications being compromised, certificates expiring, and certificates being stolen and abused. But with the sort of computational power available from a quantum computer, blockchain is also at risk. If you make a signature bigger to guard against it being cracked, the block size becomes huge and the whole blockchain grinds to a halt. Think of the data centers as buckets: three times a day the satellites throw some random numbers into the buckets, and all data centers end up with an identical bucket full of identical sets of random information.

Government data management for the digital age

Despite the complexity and lengthy time horizon of a holistic effort to modernize the data landscape, governments can establish and sustain a focus on rapid, tangible impact. A failure to deliver results from the outset can undermine stakeholder support. In addition, implementing use cases early on helps governments identify gaps in their data landscapes (for example, useful information that is not stored in any register) and missing functionalities in the central data-exchange infrastructure. To deliver impact quickly, governments may deploy “data labs”—agile implementation units with cross-functional expertise that focus on specific use cases. Solutions are rapidly developed, tested, iterated and, once successful, rolled out at scale. The German government is pursuing this approach in its effort to modernize key registers and capture more value. ... Organizations such as Estonia’s Information System Authority or Singapore’s Government Data Office have played a critical role in transforming the data landscape of their respective countries. 

Abductive inference: The blind spot of artificial intelligence

AI researchers base their systems on two types of inference machines: deductive and inductive. Deductive inference uses prior knowledge to reason about the world. This is the basis of symbolic artificial intelligence, the main focus of researchers in the early decades of AI. Engineers create symbolic systems by endowing them with a predefined set of rules and facts, and the AI uses this knowledge to reason about the data it receives. Inductive inference, which has gained more traction among AI researchers and tech companies in the past decade, is the acquisition of knowledge through experience. Machine learning algorithms are inductive inference engines. An ML model trained on relevant examples will find patterns that map inputs to outputs. In recent years, AI researchers have used machine learning, big data, and advanced processors to train models on tasks that were beyond the capacity of symbolic systems. A third type of reasoning, abductive inference, was first introduced by American scientist Charles Sanders Peirce in the 19th century. 
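The three inference styles can be contrasted in a toy sketch. This is purely illustrative: deduction applies a known rule, induction generalizes a rule from examples, and abduction picks the hypothesis that best explains an observation. The rain/sprinkler example and the likelihood numbers are hypothetical.

```python
# Toy contrast of deductive, inductive, and abductive inference.

def deduce(rule_rain_wets_grass, is_raining):
    """Deduction: rule + fact -> conclusion ('the grass is wet')."""
    return rule_rain_wets_grass and is_raining

def induce(examples):
    """Induction: generalize a rule from observed (input, outcome) pairs
    (crude: 'rain always wets the grass' if every example agrees)."""
    return all(outcome for _, outcome in examples)

def abduce(observation, likelihoods):
    """Abduction: given an observation, pick the best-explaining hypothesis.
    likelihoods[h] stands in for P(observation | h)."""
    return max(likelihoods, key=likelihoods.get)

cause = abduce("grass is wet", {"rain": 0.9, "sprinkler": 0.5, "dew": 0.2})
```

Note that abduction here requires ranking explanations it was never trained on, which is exactly the step that neither a rule engine nor a pattern-fitting model performs on its own.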

Software Engineering is a Loser’s Game

Nothing is more frustrating as a code reviewer than reviewing code whose author clearly didn't do these checks themselves. It wastes the code reviewer's time to catch simple mistakes like commented-out code, bad formatting, failing unit tests, or broken functionality in the code. All of these mistakes can easily be caught by the code author or by a CI pipeline. When merge requests are frequently full of errors, it turns the code review process into a gatekeeping process in which a handful of more senior engineers serve as the gatekeepers. This is an unfavorable scenario that creates bottlenecks and slows down the team's velocity. It also detracts from the higher purpose of code reviews, which is knowledge sharing. We can use checklists and merge request templates to serve as reminders to ourselves of things to double-check. Have you reviewed your own code? Have you written unit tests? Have you updated any documentation as needed? For frontend code, have you validated your changes in each browser your company supports?

Quote for the day:

"Effective leadership is not about making speeches or being liked; leadership is defined by results not attributes." -- Peter Drucker

Daily Tech Digest - September 20, 2021

Leadership and emotional intelligence

EI provides unique psychological resources to exert cognitive regulation over the effects of emotions, whether positive or negative, to maintain the leader's vision- or value-driven behavior. In simple words, EI is defined as cognitively controlled affective (emotional) processes that allow one to perform under stressful conditions. For example, EI is required when a person or a team loses a couple of matches: the Indian women's hockey team lost its first three matches, then won the next three and entered the semi-final. Thus it is EI that helps manage the stress generated after consecutive successes or defeats. Otherwise, sadness, grief, fear, or anxiety could take over their mental capacities. It simply means that intelligence (IQ) works well when emotions are kept under control, because rationality is not an absolute construct but rather is bounded by personal and situational constraints. EI facilitates regulating emotions in both oneself and others. Meanwhile, emotions release a sustainable source of energy that helps achieve one's long-term vision and mission of transformation for an organization or a country.

What businesses need to know about data decay

There are several scenarios that can lead to data decay. The most common occurrence is when customer records – such as sales, marketing and CRM data – are not maintained. In systems that are constantly changing and evolving to meet business needs, linkages and completeness of data sets can quickly become broken and out of date if not properly maintained. Typically, there is no single source of data in any organization; instead, data repositories span multiple platforms, formats, and views. Another factor leading to data decay is the human element. Often at some point in the journey, data is manually entered. The moment a mistype or incorrect information is entered into a system, data inconsistency, poor data hygiene and decay can occur. Enterprises copy data an average of 12 times per file, which means that a single mistake can have a compounded impact with exponential damages. Furthermore, all data has a lifecycle: data is created, used and monitored and, at some point, it becomes no longer appropriate to store and must be securely disposed of.

MicroStream 5.0 is Now Open Source

MicroStream is not a complete replacement for a database management system (DBMS), since it lacks user management, connection management, session handling, etc., but in the vision of MicroStream's developers, those features could be better implemented in dedicated server applications. MicroStream considers a DBMS an inefficient way of persisting data, since every database has its own data structure and, hence, data must be converted and mapped with an additional layer such as an object-relational mapper (ORM). These frameworks add complexity, increase latency and introduce performance loss. The MicroStream Data-Store technology removes the need for these conversions, and everything can be stored directly in memory, making it very fast for queries and simplifying the architecture using just plain Java. According to their website, performance is increased by 10x for a simple query, with a peak of 1000x for a complex query with aggregation (sum), compared to JPA. Alternatively, they also offer connectors for databases like Postgres, MariaDB, SQLite and plain-file storage (even on the cloud) to persist data.

5 Techniques to work with Imbalanced Data in Machine Learning

For classification tasks, one may encounter situations where the target class label is unequally distributed across the various classes. Such conditions are termed an imbalanced target class. Modeling an imbalanced dataset is a major challenge faced by data scientists, as the imbalance in the data biases the model towards predicting the majority class. Hence, handling the imbalance in the dataset is essential prior to model training. There are various things to keep in mind while working with imbalanced data. ... Upsampling, or oversampling, refers to techniques that create artificial or duplicate data points of the minority class to balance the class labels. There are various oversampling techniques that can be used to create artificial data points. ... Undersampling is generally not recommended, as it removes majority-class data points; oversampling techniques are often considered better than undersampling techniques. The idea is to combine undersampling and oversampling techniques to create a robust, balanced dataset fit for model training.
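The simplest oversampling variant, random duplication of minority-class rows, can be hand-rolled in a few lines. Libraries such as imbalanced-learn provide richer techniques (e.g. SMOTE, which synthesizes new points rather than duplicating); this sketch just shows the idea, and the toy dataset is hypothetical.

```python
import random
from collections import Counter

# Random oversampling sketch: duplicate minority-class rows until every
# class has as many rows as the largest class.

def oversample(X, y, seed=0):
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())          # size of the majority class
    X_out, y_out = list(X), list(y)
    for label, n in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == label]
        for _ in range(target - n):        # top up each smaller class
            i = rng.choice(idx)            # duplicate a random row of that class
            X_out.append(X[i])
            y_out.append(label)
    return X_out, y_out

X = [[0.1], [0.2], [0.3], [0.4], [0.9]]
y = [0, 0, 0, 0, 1]                        # imbalanced: four 0s, one 1
X_bal, y_bal = oversample(X, y)            # now four rows of each class
```

Duplicated rows carry no new information, which is why synthetic methods like SMOTE are usually preferred on real data, but the balancing effect on the loss is the same.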

Why you need a personal laptop

Even if you leave your company with plenty of notice, moving a bunch of things off your work device in the last few days of your tenure could raise some eyebrows with IT — who, remember, can see everything you’re doing on that device. “Let’s say you’re going to work at a competitor,” Toohil says. “They’re gonna go through that huge audit trail, see, wow, you moved a bunch of data off this laptop in the week before you left. And that opens up a huge liability for you personally. At a minimum, you’re going to spend some time explaining what you were doing. In the worst case, you took some corporate information.” ... And if things go wrong, the list of embarrassing possibilities is endless: do you really want to be this woman, who received a text message about pooping on her computer while sharing her screen with executives? Or this employee, who accidentally posted fetish porn in a company-wide group chat? ...  If you’re mixing work and pleasure on one device, just one mistaken email attachment or one incorrect copy / paste could lead to scenarios that aren’t just embarrassing but could harm your relationships with co-workers and even jeopardize your job.

What is developer relations? Understanding the 'glue' that keeps software and coders together

Developer relations can take different forms and can mean different things to different organizations. It can involve: talking about a vendor's app at a conference; creating tutorials and walkthrough videos for YouTube; creating app resources for GitHub or responding to questions from developers on Stack Overflow. At its core, however, DevRel is about building rapport with the developer community and leveraging this to figure out how to build successful software applications. In this sense, developer relations is about closing the feedback loop and creating a bridge between the people who use the software and the wider organization, says Lorna Mitchell, head of developer relations at open-source software company, Aiven. "You need a way to speak to your developers," Mitchell tells ZDNet. "You have to be there – to be in the communities where the developers are. If someone has a question about your product on Stack Overflow, you want to be responding to that." Mitchell describes developer relations as a "glue" role, which is why it's common to see it report into different parts of an organization.

Curate Your Thought Leadership: 3 Pro Tips

Of course, goals, habits and systems need to be measured to determine if you are making headway. Social channels don’t let us peek behind the curtain of the algorithm, making it a challenge to see if we are getting the most traction. Duritsa’s metric of choice is to see how many viewers click through to his profile. Your goal might be to measure engagement with your posts. Social media expert Marie Incontrera of Incontrera Consulting works with clients on thought leadership strategies that include social media, podcasting and speaking engagements, including TEDx. She suggests that a 1% engagement rate on LinkedIn counts as a win. For example, if your post gets 100 views, then one engagement – a reaction or a comment – is good. She also says that if you post every day you’ll goose the algorithm into “super poster” status, where your posts are amplified further than if you post less frequently. As you track your results, look at what topics get the most attention from your audience, determine what is resonating with them, then dial up what they like and phase out the types of posts that fall flat.
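The 1% benchmark above is simple to track yourself. A minimal sketch, assuming you export view/reaction/comment counts per post (the post data and threshold here are illustrative, not from any real LinkedIn API):

```python
# Hypothetical engagement-rate check against the 1% "win" benchmark
# suggested in the article. All post stats below are made up.

def engagement_rate(views: int, reactions: int, comments: int) -> float:
    """Return engagement as a fraction of views (0.0 if no views)."""
    if views == 0:
        return 0.0
    return (reactions + comments) / views

BENCHMARK = 0.01  # 1% engagement counts as a win

posts = [
    {"title": "Post A", "views": 100, "reactions": 1, "comments": 0},
    {"title": "Post B", "views": 500, "reactions": 2, "comments": 1},
]

for post in posts:
    rate = engagement_rate(post["views"], post["reactions"], post["comments"])
    status = "win" if rate >= BENCHMARK else "below benchmark"
    print(f"{post['title']}: {rate:.1%} ({status})")
```

Running this flags Post A (1 engagement on 100 views, exactly 1%) as a win and Post B (3 on 500, 0.6%) as below the benchmark.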

What role must CDOs play in today's new working environment?

As businesses look for ways to insulate themselves from future shock and find new and constantly evolving ways to deliver ROI, the workforce will need to embrace new data skills and technologies to provide insights faster and speed the decision making that informs the business. All of this mandates the need for a digital cartographer — a CDO — whose role will be to help prioritise the dissemination of data, improving data access and developing an always-on approach to upskilling and a data culture across the business. By spearheading data democratisation across the organisation, a CDO can empower a dispersed, hybrid, data-literate workforce to deliver data-led insights by providing them with the right data tools to make that goal a reality. By providing access to data and analytics through easy-to-use, code-friendly self-service platforms, the CDO can create space for employees who want to upskill and become skilled knowledge workers themselves. Democratising access to these resources puts data science tools into the hands of the people with problems to solve – not exclusively those with years of experience or a specific university degree.

How to Craft Incident Response Procedures After a Cybersecurity Breach

As with all types of battles, cybersecurity is a game of preparation. Long before an incident occurs, trained security teams should know how to execute an incident response procedure in a timely and effective manner. To prepare your incident response plan, you must first review your existing protocols and examine critical business areas that could be targeted in an attack. Then, you must work to train your current teams to respond when a threat occurs. You must also conduct regular threat exercises to keep this training fresh in everyone's minds. ... Even with the best preparation, breaches still happen. For this reason, the next stage of an incident response procedure is to actively monitor possible threats. Cybersecurity professionals can use many intrusion prevention systems to find an active vulnerability or detect a breach. Some of the most common forms of these systems include signature, anomaly, and policy-based mechanisms. Once a threat is detected, these systems should also alert security and management teams without causing unnecessary panic.
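The signature-based mechanism mentioned above is the simplest of the three to illustrate: known attack patterns are matched against incoming traffic or logs, and matches raise alerts. A minimal sketch, with entirely hypothetical signatures and log lines (real IDS rulesets, such as those used by Snort or Suricata, are far richer):

```python
# Toy signature-based detection pass. The signature strings and the
# sample log are illustrative only, not drawn from any real ruleset.

SIGNATURES = {
    "sql_injection": "' OR 1=1",
    "path_traversal": "../../etc/passwd",
}

def scan_line(line: str) -> list[str]:
    """Return the names of all signatures found in a single log line."""
    return [name for name, pattern in SIGNATURES.items() if pattern in line]

def scan_log(lines: list[str]) -> list[tuple[int, str]]:
    """Scan a log and return (line_number, signature_name) alerts.

    Alerts are collected rather than raised, so a monitoring loop can
    notify security teams without halting on the first hit.
    """
    alerts = []
    for line_no, line in enumerate(lines, start=1):
        for name in scan_line(line):
            alerts.append((line_no, name))
    return alerts

log = [
    "GET /index.html 200",
    "GET /search?q=' OR 1=1 -- 500",
    "GET /../../etc/passwd 403",
]
print(scan_log(log))  # alerts for lines 2 and 3
```

Anomaly- and policy-based systems differ in where the "pattern" comes from: anomaly detection flags deviations from a learned baseline of normal behaviour, while policy-based detection flags violations of explicitly configured rules.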

How to retain the best talent in a competitive cybersecurity market

In cybersecurity, employees are often exposed to several aspects of technology and innovation. What I’ve learned from many conversations with employees is that, ultimately, people want to work for organizations that are developing cutting-edge technology and making a real impact in the industry. They want to contribute to the solutions that are solving today’s most important problems – and in IT security, where cyber threats loom over organizations daily, there’s an immense opportunity to play such a rewarding, impactful role. It’s up to employers to share a vision with employees. Employees must see how their contributions affect the company, its customers, and the wider landscape. Often, employees may not realize that they’re contributing to solving a major, real-world issue, so it’s up to leadership – including HR leaders – to regularly communicate why the company exists, the difference it’s making, and how each employee plays a role in that mission. What attracts security professionals to a company is the power and impact of the technology and the experience they can gain.

Quote for the day:

"Leaders must know where they are going if they expect others to willingly join them on the journey." -- Kouzes & Posner