Daily Tech Digest - June 28, 2022

What Gartner’s top cybersecurity predictions for 2022-23 reveal

Implied in the predictions is advice to focus not just on ransomware or any other currently trending type of cyberattack, but to prioritize cybersecurity investments as core to managing risks and see them as investments in the business. By 2025, 60% of organizations will use cybersecurity risk as a primary determinant in conducting third-party transactions and business engagements, according to Gartner's predictions. Doubling down with greater resilience across every threat surface is key. For example, while Gartner mentions zero-trust network access (ZTNA) in just one of the eight predictions, the core concepts of ZTNA and its benefits are reflected in most of the predictions. The predictions also note that investing in preventative controls is not enough, and that there needs to be a much higher priority placed on resilience. This is because threat surfaces grow faster than many organizations can gain visibility into and protect. By 2025, it is expected that 80% of enterprises will adopt a strategy to unify web, cloud services and private application access from a single vendor's security service edge (SSE) platform.


Don't Get Fired: How to Sell Max Cybersecurity to the C-Suite

"So some of the strategies that I use when I'm working with the C-level teams, the boards of directors, is I don't just give them a summarization or my opinion," continued O'Neill Sr. "I bring in events from insurance -- our insurance broker or our auditors -- and I say, 'Hey, can you give me a few examples of other customers where their cybersecurity insurance didn't get renewed because of some event? Or can you give me an example of an audit that failed because proper levels of protection weren't put in place?'" "And I articulate those things to the CEOs and the boards of directors. Not in long-worded descriptions, but basically like, 'Hey, you know, if you look at this year, our actual insurance broker says that they have processed claims for a billion dollars this year because of security events where malware has been involved.' And then I show them data where I say, 'Okay, of the 100 events ... about 15 percent of those companies never survived. They did not return to business.'"


How tech companies are responding to the talent gap

The savviest organizations are taking on the onus of training talent themselves, increasingly hiring people straight out of school, according to Jean-Marc Laouchez, president of the Korn Ferry Institute. These firms are also trying to instill a culture of continuous learning and training. “Constant learning — driven by both workers and organizations — will be central to the future of work, extending far beyond the traditional definition of learning and development,” Laouchez wrote. In that light, coding bootcamps have become talent pools for organizations looking for skills-based applicants over more traditional college graduates. Graduates from coding bootcamps reported a quick ROI, higher salaries, and STEM career opportunities, according to a recent survey of 3,800 US graduates of university coding bootcamps by US education company 2U and Gallup. Overall, graduates saw their salaries increase by a median of $11,000 one year after graduation, with those who moved from non-STEM to STEM jobs after graduation seeing the highest income growth.


Strategies for adopting data stewardship without a CDO

If the company has already concluded that it can’t hire a full-time CDO, the next best thing is to look at individuals in the company who have some of the skills or who have backgrounds and talents that would enable them to skill up quickly. The first place to look is in the database group. The database administrator should be charged with oversight of the development of the entire corporate data architecture. When an overall data architecture is in place, you have a structure that ensures all of your various data repositories and processes can interact with each other in enterprise-wide data exchanges and ensures you have the tools, such as APIs (application programming interfaces) and ETL (extract, transform, load), to facilitate integration. This also means eradicating stand-alone data silos that might exist within the company. ... The database group can work hand in hand with the IT security group to make sure all data is properly secured and that it meets corporate governance standards, even if the data is incoming from third-party vendors.
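The integration tooling mentioned above can be sketched in miniature. The following Python sketch shows the extract-transform-load pattern moving records from a hypothetical departmental silo into a shared enterprise store; all field names and structures are illustrative assumptions, not any particular product's schema.

```python
# Minimal ETL sketch: pull records from a departmental "silo", normalize
# them, and load them into a shared repository. Names are illustrative.

def extract(source_rows):
    """Extract: read raw records from a departmental source."""
    return list(source_rows)

def transform(rows):
    """Transform: normalize field names and formats for the shared model."""
    normalized = []
    for row in rows:
        normalized.append({
            "customer_id": str(row["CustID"]).strip(),
            "email": row["EMail"].lower(),
        })
    return normalized

def load(rows, warehouse):
    """Load: merge records into the enterprise-wide store, keyed by ID."""
    for row in rows:
        warehouse[row["customer_id"]] = row
    return warehouse

warehouse = {}
silo = [{"CustID": " 1001 ", "EMail": "Ada@Example.com"}]
load(transform(extract(silo)), warehouse)
print(warehouse["1001"]["email"])  # ada@example.com
```

Real ETL tools add scheduling, error handling, and lineage tracking on top of this same three-step skeleton.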


Secure everything, not just the weakest link

When looking at the security of links between a company and its business partners, BCS volunteer Petra Wenham says: “We must include the company’s IT in that statement and the security of a partner’s IT system.” Junade Ali, a technologist with an interest in software engineering management and computer security, points to the OAuth vulnerability as an example of the risks organisations face across their supply chains when they connect or make use of third-party systems. “In the recent past, I’ve worked on changing practices across the industry when it comes to password security,” he says. “I developed the anonymity models used by Have I Been Pwned, the developer tooling needed to improve password security practices and published scientific studies used to change the industry understanding of the best practice.” What Ali learned was that the reuse of compromised credentials from one low-value website (say, a pizza restaurant) often cascades to compromising someone’s online banking. He adds: “The message here is clear – security isn’t purely within our fiefdom and we depend on others to keep our data safe.”
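The anonymity model Ali mentions works by k-anonymity: the client hashes the password with SHA-1 and sends only the first five hex characters to the Pwned Passwords range endpoint, then matches the returned suffixes locally, so the full hash never leaves the device. Here is a minimal Python sketch of the client side; the response body below is illustrative rather than a real API reply.

```python
import hashlib

def hash_parts(password):
    """Hash the password with SHA-1 and split it for a k-anonymity range
    query. Only the 5-character prefix would ever leave the client."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix, range_response):
    """Scan a range-query response (lines of 'SUFFIX:COUNT') for our suffix."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

prefix, suffix = hash_parts("password")
# In practice the response comes from
# GET https://api.pwnedpasswords.com/range/<prefix>; this body and its
# counts are made up for illustration.
sample_response = suffix + ":3730471\n" + "F" * 35 + ":2"
print(prefix)                                  # 5BAA6
print(breach_count(suffix, sample_response))   # 3730471
```

Because the server only ever sees a five-character prefix shared by hundreds of hashes, it cannot tell which password was actually checked.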


How APTs Are Achieving Persistence Through IoT, OT, and Network Devices

Due to the low security and visibility of these devices, they are an ideal environment for staging secondary attacks on more valuable targets inside the victim's network. To do this, an attacker will first get into the company's network through traditional approaches like phishing. Attackers can also gain access by targeting an Internet-facing IoT device such as a VoIP phone, smart printer, or camera system, or an OT system such as a building access control system. Since most of these devices use default passwords, this type of breach is often trivial to achieve. Once on the network, the attacker will move laterally and stealthily to seek out other vulnerable, unmanaged IoT, OT, and network devices. Once those devices have been compromised, the attacker just needs to establish a communication tunnel between the compromised device and the attacker's environment at a remote location. In the case of UNC3524, attackers used a specialized version of Dropbear, which provides a client-server SSH tunnel and is compiled to operate on the Linux, Android, or BSD variants that are common on those devices.


Transforming advanced manufacturing through Industry 4.0

Organizational problems often involve low buy-in and a lack of concentration from leadership as a business attempts to see a digital transformation through. That hampers the effort’s potential success and long-term viability. Inadequate knowledge of digital capabilities and a lack of organizational talent can prevent broader buy-in and properly scaled transformative efforts. Technology roadblocks commonly include low support from partners in scaling deployment while facing multiple platform choices, which hinders an organization’s ability to move quickly into new territory. The transformation’s starting point can also stall when leaders aren’t convinced of their ability to increase the size and scope of the digital architecture they choose for implementation. AI companies have tried many approaches to overcome these barriers and realize improved performance through digital manufacturing transformations. An examination of advanced manufacturing lighthouses reveals two critical reasons that their transformations succeeded: first, they chose the right use cases; second, they looked for ways that those use cases could reinforce one another.


Cybersecurity and the metaverse: Identifying the weak spots

The metaverse is designed to function through the use of digital avatars that each user creates for themselves. Ostensibly, this avatar will be both unique and secure, which will allow the real human it represents to use their personally identifiable information (PII) and other sensitive information to make purchases, do work and even receive healthcare. In addition, through the avatar, the user can interact with others in the digital space, including working with colleagues in a virtual office. The concern, however, is that because the avatar is, fundamentally, the skeleton key to your private offline information, from your PII to your financial accounts, if a hacker gains access to your avatar, then they can open the door to your entire life. This holds the potential to take identity theft to an unprecedented level. Identity theft in the metaverse can also take another, and perhaps even more sinister, turn, however. If hackers gain control of your avatar, they may well engage in behaviors that can ruin your relationships and reputation, and may even put your offline safety at risk.


The promise of edge computing comes down to data

Move computing power to where the data is. Determining whether edge or cloud is optimal for a particular workflow or use case can cause analysis paralysis. Yet the truth is the models are complementary, not competing. “The general rule of thumb is that you’re far better moving compute to the data than vice versa,” said Robert Blumofe, executive vice president and chief technology officer at Akamai. “By doing so, you avoid back hauling, which hurts performance and is expensive.” Consider an e-commerce application that orchestrates actions like searching a product catalog, making recommendations based on history, or tracking and updating orders. “It makes sense to do the compute where that data is stored, in a cloud data warehouse or data lake,” Blumofe said. The edge, on the other hand, lends itself to computing on data that’s in motion — analyzing traffic flow to initiate a security action, for example.

Go heavy on experimentation. It’s still early days in edge computing, and most companies are at the beginning of the maturity curve, evaluating how and where the model can have the most impact.


The surveillance-as-a-service industry needs to be brought to heel

The problem here is that surveillance technologies such as these have been commercialized. It means capabilities that historically have only been available to governments are also being used by private contractors. And that represents a risk, as highly confidential tools may be revealed, exploited, reverse-engineered and abused. As Google said: “Our findings underscore the extent to which commercial surveillance vendors have proliferated capabilities historically only used by governments with the technical expertise to develop and operationalize exploits. This makes the Internet less safe and threatens the trust on which users depend.” Not only this, but these private surveillance companies are enabling dangerous hacking tools to proliferate, while making these high-tech snooping capabilities available to governments — some of which seem to enjoy spying on dissidents, journalists, political opponents, and human rights workers. An even bigger danger is that Google is already tracking at least 30 spyware makers, which suggests the commercial surveillance-as-a-service industry is thriving.



Quote for the day:

"Strategy is not really a solo sport - even if you're the CEO." -- Max McKeown

Daily Tech Digest - June 27, 2022

Collaboration Is a Key Skill. So Why Aren’t We Teaching It?

Beyond an organization’s bottom line, positive workplace relationships matter to individuals’ well-being. Whether respondents’ relationships with their most liked, least liked, or most influential collaborators were being rated, the quality of their collaborative relationships positively predicted job satisfaction, good mental health, and positive attitudes about workplace collaboration. Having even one low-quality collaborative relationship may drive undesirable outcomes, including poor mental health that contributes to burnout, and job dissatisfaction that contributes to turnover. Given that collaborative relationship quality is important both to individuals and to bottom lines, why don’t organizations provide more opportunities for people to develop collaborative skills? It could be that companies do, in fact, make development opportunities available but that individuals fail to see those opportunities as either available or related to collaboration. Or it could be that such offerings are precluded by underlying assumptions that people pick up relationship skills via osmosis rather than direct training, that they are just naturally “good” or “not good” at relationships, or that these skills cannot be learned.


The Best Raspberry Pi 4 Alternatives

The Tinkerboard’s processor is more powerful than the one you’ll find in the Pi 4 B, so you may be able to get even more ambitious with your builds. However, when they’re available, you can get Pi 4s with up to 8 GB of RAM, which is more than the 2 GB that the Tinkerboard offers. Then there is the price. You can pick up a Tinkerboard S R2.0 on Amazon for $149.99 — which is more than some of the inflated Pi 4s are currently selling for. In short, this is a good option if you need more processing power or you can’t find a Pi 4, even at a premium. ... The Linux-powered ODROID XU4Q benefits from “Samsung Exynos5422 Cortex-A15 2GHz and Cortex-A7 Octa core CPUs” along with 2GB of DDR3 RAM. On paper, this potentially makes the XU4Q the most powerful micro-computer on this list. It also comes with a very large heatsink attached, presumably to soak up some of the heat from its relatively powerful processor. With regards to ports, ODROID has managed to cram two USB 3.0, one USB 2.0, a Gigabit Ethernet, and an HDMI port onto the tiny board.


Google’s AI spotlights a human cognitive glitch: mistaking fluent speech for fluent thought

The human brain is hardwired to infer intentions behind words. Every time you engage in conversation, your mind automatically constructs a mental model of your conversation partner. You then use the words they say to fill in the model with that person’s goals, feelings and beliefs. The process of jumping from words to the mental model is seamless, getting triggered every time you receive a fully fledged sentence. This cognitive process saves you a lot of time and effort in everyday life, greatly facilitating your social interactions. However, in the case of AI systems, it misfires – building a mental model out of thin air. A little more probing can reveal the severity of this misfire. Consider the following prompt: “Peanut butter and feathers taste great together because___”. GPT-3 continued: “Peanut butter and feathers taste great together because they both have a nutty flavor. Peanut butter is also smooth and creamy, which helps to offset the feather’s texture.” The text in this case is as fluent as our example with pineapples, but this time the model is saying something decidedly less sensible.


Cloudflare's outage was human error. There's a way to make tech divinely forgive

What's the lesson? It's not news that people make mistakes, and the more baroque things become, the harder they are to guard against. It's just that what gets advertised on BGP isn't just routes but things crapping out, and when you're Cloudflare that's what the C in CDN becomes. It's not the first time it's happened, nor the last, and one trusts the company will hire a choreographer to prevent further op-on-op stompfests. Yet if it happens, and keeps happening, why aren't systems more resilient to this sort of problem? You can argue that highly dynamic and structurally fluid routing mechanisms can't be algorithmically or procedurally safeguarded, and we're always going to live in the zone where the benefits of pushing just a bit too hard for performance are worth the occasional chaotic hour. That's defeatist talk, soldier. There's another way to protect against the unexpected misfire, other than predicting or excluding. You'll be using it already in different guises, some of which have been around since the dawn of computer time: state snapshotting.
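The snapshotting idea is simple enough to sketch: capture the known-good state before a risky change, and restore it if the change blows up. The Python toy below illustrates the pattern with an in-memory routing table; the state, names, and "bad config push" are all invented for illustration.

```python
import copy
from contextlib import contextmanager

@contextmanager
def snapshot(state):
    """Take a deep snapshot of mutable state before a risky change and
    roll back to it if the change raises. A toy illustration of the idea."""
    saved = copy.deepcopy(state)
    try:
        yield state
    except Exception:
        state.clear()      # restore the pre-change state in place
        state.update(saved)
        raise

routes = {"203.0.113.0/24": "pop-1"}
try:
    with snapshot(routes):
        routes["203.0.113.0/24"] = "pop-2"
        raise RuntimeError("bad config push")   # simulated misfire
except RuntimeError:
    pass
print(routes["203.0.113.0/24"])  # pop-1  (state restored)
```

Real systems apply the same principle at a larger scale: filesystem snapshots, database transactions, and staged network-config commits that auto-revert unless confirmed.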


Stronger Security for Smart Devices To Efficiently Protect Against Powerful Hacker Attacks

Researchers are racing against hackers to develop stronger protections that keep data safe from malicious agents who would steal information by eavesdropping on smart devices. Much of the effort into preventing these “side-channel attacks” has focused on the vulnerability of digital processors. Hackers, for example, can measure the electric current drawn by a smartwatch’s CPU and use it to reconstruct secret data being processed, such as a password. MIT researchers recently published a paper in the IEEE Journal of Solid-State Circuits, which demonstrated that analog-to-digital converters in smart devices, which encode real-world signals from sensors into digital values that can be processed computationally, are vulnerable to power side-channel attacks. A hacker could measure the power supply current of the analog-to-digital converter and use machine learning algorithms to accurately reconstruct output data. Now, in two new research papers, engineers show that analog-to-digital converters are also susceptible to a stealthier form of side-channel attack, and describe techniques that effectively block both attacks.


What is AI governance?

It can be helpful to break apart the governance of AI algorithms into layers. At the lowest level, close to the process, are the rules governing which humans have control over the training, retraining and deployment. The issues of accessibility and accountability are largely practical and implemented to prevent unknowns from changing the algorithm or its training set, perhaps maliciously. At the next level, there are questions about the enterprise that is running the AI algorithm. The corporate hierarchy that controls all actions of the corporation is naturally part of the AI governance because the curators of the AI fall into the normal reporting structure. Some companies are setting up special committees to consider ethical, legal and political aspects of governing the AI. Each entity also exists as part of a larger society. Many of the societal rule-making bodies are turning their attention to AI algorithms. Some are simply industry-wide coalitions or committees. Some are local or national governments and others are nongovernmental organizations. These groups are often talking about passing laws or creating rules for how AI can be leashed.


Continuous Operations is the Unsung Hero of DevOps

For continuous operations to be successful, you must have infrastructure automation in place. In fact, continuous operations cannot exist without infrastructure automation. The true value that arises from the combination of infrastructure automation and continuous operations is that it gives back IT operations teams their time so they can focus on more complex reasoning or problem-solving tasks while the system simply continuously scans and fixes errors. ... The very essence of DevOps is constant change. Continuous operations may ultimately return your infrastructure to its desired state, but philosophically, it’s about being able to quickly and securely identify anomalies, apply fixes and modify your infrastructure as quickly as possible. It’s not as simple as flipping a switch or pushing a line of code. As the demand for security and compliance swells, continuous operations will have to build in these elements to be de facto checkboxes in the loop. At Puppet, we’ve baked continuous compliance and security into our infrastructure automation products to ensure continuous operations are indeed continuous. 
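The "continuously scans and fixes errors" loop described above is, at its core, a reconcile loop: compare actual state to desired state and remediate any drift. The Python sketch below is a toy version of that idea; the resource names and states are invented, and real tools like Puppet do this with declarative catalogs rather than dicts.

```python
# A toy reconcile loop: compare actual state against desired state and
# apply fixes until they converge. Names and states are illustrative.

desired = {"web-01": "nginx:running", "db-01": "postgres:running"}
actual = {"web-01": "nginx:stopped", "db-01": "postgres:running"}

def detect_drift(desired, actual):
    """Return the resources whose actual state differs from desired."""
    return {name for name, state in desired.items() if actual.get(name) != state}

def remediate(name):
    """Apply the desired state to a drifted resource (stubbed out here)."""
    actual[name] = desired[name]

for name in sorted(detect_drift(desired, actual)):
    remediate(name)

print(detect_drift(desired, actual))  # set()  (converged)
```

In production, the detect step polls or subscribes to real infrastructure, and remediation re-runs the same idempotent configuration rather than patching state directly.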


OT security: Helping under-resourced critical infrastructure organizations

The biggest problem in OT security is the cultural divide between IT and OT. IT security is a mature field, with standards, frameworks, and an abundance of mature and emerging technologies. The OT security field is much less mature, lacking people with OT security experience, established best practices and frameworks, and with a much smaller selection of security technologies. Historically, IT and OT have worked independently on security, with OT engineers overseeing security in the OT environment, where it was not as critical due to limited or no connectivity to the internet and to the enterprise. Today, however, most OT environments are connected to the enterprise IT environment and to the internet. The benefits of Industry 4.0 and digital transformation in OT have accelerated connectivity in OT, including to cloud environments. The prevalence of converged IT/OT environments makes it imperative that IT and OT teams work together to secure them. The problem is that cultural divide. The good news is that it can be conquered, by bringing the two teams together to create an OT security strategy that is owned jointly by both teams.


Why to Create a More Data-Conscious Company Culture

A data culture creates standards for employee data literacy and provides open and transparent access to what assets exist, as well as standards for curation, quality, and certification so employees have a shared understanding of the data within an organization. “This will not resolve the silos, but it will create a transparent view of the entire enterprise data fabric,” Wills explains. He adds some of the approaches Alation has seen work well include things like providing an enterprise-wide data literacy training and certification program, so that everyone shares the same perspective, vocabulary, and basic analytic skills. Each functional business unit and area should include data training as part of their employee onboarding as it provides a review of an organization’s authoritative data and data-related assets, the process used to maintain them, and sets expectations for how employees should participate. “Also, recognition: Nothing motivates more and sends a stronger message than employees seeing each other be recognized and rewarded for their contributions,” Wills says. 


Valuing commercial real estate in the metaverse

One of the most obvious ways in which digital real estate diverges from its physical counterpart is in the limited utility that it provides. This, of course, is because digital products do not require storage, nor do the digital people who populate the metaverse need to be kept comfortable or warm in indoor venues. However, the sense of discovery in the search for goods and services remains genuine within the metaverse, and it is in this way that virtual utility provides the most value. As businesses are free to design their purchased real estate however they want to, they can dedicate their efforts to creating the most eye-catching and exciting facades that will entice users to discover more about their property – and ultimately the goods and services they have on offer. Therefore, it is not so much the utility of a piece of real estate that determines its valuation – but more its network power. For example, how easy is it to discover this real estate? How well connected is it? What is the purchase power of the people coming to the piece of real estate? In this sense, valuing real estate in the metaverse, I’d argue, is a lot more like valuing a website, i.e., how many clicks does it get?



Quote for the day:

"Leadership - mobilization toward a common goal." -- Gary Wills

Daily Tech Digest - June 26, 2022

Only 3% of Open Source Software Bugs Are Actually Attackable, Researchers Say

Making the determination of what's attackable comes by looking beyond the presence of open source dependencies with known vulnerabilities and examining how they're actually being used, says Manish Gupta, CEO of ShiftLeft. "There are many tools out there that can easily find and report on these vulnerabilities. However, there is a lot of noise in these findings," Gupta says. ... The idea of analyzing for attackability also involves assessing additional factors like whether the package that contains the CVE is loaded by the application, whether it is in use by the application, whether the package is in an attacker-controlled path, and whether it is reachable via data flows. In essence, it means taking a simplified threat modeling approach to open source vulnerabilities, with the goal of drastically cutting down on the fire drills. CISOs have already become all too familiar with these drills. When a new high-profile supply chain vulnerability like Log4Shell or Spring4Shell hits the industry back channels, then blows up into the media headlines, their teams are called to pull long days and nights figuring out where these flaws impact their application portfolios, and even longer hours in applying fixes and mitigations to minimize risk exposures.
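One way to picture the "reachable via data flows" test is as a graph problem: a CVE in a dependency is only attackable if some application entry point can actually reach the vulnerable function through the call graph. The Python sketch below shows that reduction with a breadth-first search; the call graph and function names are entirely made up, and real tools build this graph from static and data-flow analysis rather than a hand-written dict.

```python
from collections import deque

# Toy call graph: edges from caller to callee. Whether a CVE is
# "attackable" here reduces to whether an entry point can reach the
# vulnerable function. All names are illustrative.
call_graph = {
    "app.handle_request": ["app.render", "lib.parse_header"],
    "app.render": ["lib.escape"],
    "lib.parse_header": ["lib.vulnerable_decode"],   # CVE lives here
    "lib.unused_helper": ["lib.vulnerable_decode"],  # present but never called
}

def reachable(entry, target, graph):
    """Breadth-first search from an entry point toward the vulnerable function."""
    seen, queue = {entry}, deque([entry])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for callee in graph.get(node, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

print(reachable("app.handle_request", "lib.vulnerable_decode", call_graph))  # True
print(reachable("app.render", "lib.vulnerable_decode", call_graph))          # False
```

In this toy example the vulnerable package is present on both paths, but only the request handler actually reaches it, which is exactly the distinction that cuts the fire-drill list down.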


The Power and Pitfalls of AI for US Intelligence

Depending on the presence or absence of bias and noise within massive data sets, especially in more pragmatic, real-world applications, predictive analysis has sometimes been described as “astrology for computer science.” But the same might be said of analysis performed by humans. A scholar on the subject, Stephen Marrin, writes that intelligence analysis as a discipline by humans is “merely a craft masquerading as a profession.” Analysts in the US intelligence community are trained to use structured analytic techniques, or SATs, to make them aware of their own cognitive biases, assumptions, and reasoning. SATs—which use strategies that run the gamut from checklists to matrixes that test assumptions or predict alternative futures—externalize the thinking or reasoning used to support intelligence judgments, which is especially important given the fact that in the secret competition between nation-states not all facts are known or knowable. But even SATs, when employed by humans, have come under scrutiny by experts like Chang, specifically for the lack of scientific testing that can evidence an SAT’s efficacy or logical validity.


Data Modeling and Data Models: Not Just for Database Design

The prevailing application-centric mindset has caused the fundamental problems that we have today, Bradley said, with multiple disparate copies of the same concept in system after system after system after system. Unless we replace that mindset with one that is more data-focused, the situation will continue to propagate, he said. ... Models have a wide variety of applicable uses and can present different levels of detail based on the intended user and context. Similarly, a map is a model, and it can be used much like models are used in a business. Like data models, there are different levels of maps for different audiences and different purposes. A map of the counties in an election will provide a different view than a street map used for finding an address. A construction team needs a different type of detail on a map they use to connect a building to city water, and a lesson about different countries on a globe uses still another level of detail targeted to a different type of user. Similarly, some models are more focused on communication and others are used for implementation.


Microverse IDE Unveiled for Web3 Developers, Metaverse Projects

"With Microverse IDE, developers and designers collaboratively build low-latency, high-performance multiuser Microverse spaces and worlds which can then be published anywhere," the company said in a June 21 news release. As part of its Metaverse democratization effort, Croquet has open sourced its Microverse IDE Metaverse world builder and some related components under the Apache License Version 2.0 license so developers and adopters can examine, use and modify the software as needed. ... The California-based Croquet also announced the availability of its multiplane portal technology, used to securely connect independent 3D virtual worlds developed by different parties, effectively creating the Metaverse from independent microservices. These connections can even span different domains, the company said, thus providing safe, secure and decentralized interoperability among various worlds independent of the large technology platforms. "Multiplane portals solve a fundamental problem in the Metaverse with linking web-based worlds in a secure and safe way," the company said.


5 Firewall Best Practices Every Business Should Implement

Changes that impact your IT infrastructure happen every single day. You might install new applications, deploy additional network equipment, grow your user base, adopt non-traditional work practices, etc. As all this happens, your IT infrastructure’s attack surface will also evolve. Sure, you can make your firewall evolve with it. However, making changes to your firewall isn’t something you should take lightly. A simple mistake can take some services offline and disrupt critical business processes. Similarly, you could also expose ports to external access and compromise their security. Before you apply changes to your firewall, you need to have a change management plan. The plan should specify the changes you intend to implement and what you hope to achieve. ... Poorly configured firewalls can be worse than having no firewall, as a poorly installed firewall will give you a false sense of security. The same is true with firewalls without proper deployment planning or routine audits. However, many businesses are prone to these missteps, resulting in weak network security and a failed investment.
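A change management plan starts from knowing exactly what a change will do. The minimal Python sketch below diffs a current ruleset against a proposed one so the plan can list precisely what will be added and removed before anything touches the firewall; the rule strings are invented placeholders, not any vendor's syntax.

```python
# Sketch of reviewing a firewall change before applying it: diff the
# current ruleset against the proposed one so the change plan states
# exactly what will be added and removed. Rules are illustrative.

current = {
    "allow tcp 443 from any",
    "allow tcp 22 from 10.0.0.0/8",
    "deny all",
}
proposed = {
    "allow tcp 443 from any",
    "allow tcp 8080 from 10.0.0.0/8",
    "deny all",
}

def change_plan(current, proposed):
    """Return (rules to add, rules to remove) for review before rollout."""
    return sorted(proposed - current), sorted(current - proposed)

to_add, to_remove = change_plan(current, proposed)
print(to_add)     # ['allow tcp 8080 from 10.0.0.0/8']
print(to_remove)  # ['allow tcp 22 from 10.0.0.0/8']
```

Seeing that the plan silently removes the SSH allow rule, for instance, is exactly the kind of surprise a pre-change review is meant to catch.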


Debate over AI sentience marks a watershed moment

While it is objectively true that large language models such as LaMDA, GPT-3 and others are built on statistical pattern matching, subjectively this appears like self-awareness. Such self-awareness is thought to be a characteristic of artificial general intelligence (AGI). Well beyond the mostly narrow AI systems that exist today, AGI applications are supposed to replicate human consciousness and cognitive abilities. Even in the face of remarkable AI advances of the last couple of years there remains a wide divergence of opinion between those who believe AGI is only possible in the distant future and others who think this might be just around the corner. DeepMind researcher Nando de Freitas is in this latter camp. Having worked to develop the recently released Gato neural network, he believes Gato is effectively an AGI demonstration, only lacking in the sophistication and scale that can be achieved through further model refinement and additional computing power. The deep learning transformer model is described as a “generalist agent” that performs over 600 distinct tasks with varying modalities, observations and action specifications. 


Data Architecture Challenges

Most traditional businesses preserved data privacy by holding function-specific data in departmental silos. In that scenario, data used by one department was not available or accessible by another department. However, that caused a serious problem in the advanced analytics world, where 360-degree customer data or enterprise marketing data are everyday necessities. Companies, irrespective of their size, type, or nature of business, soon realized that to succeed in the digital age, data had to be accessible and shareable. Then came data science, artificial intelligence (AI), and a host of related technologies that transformed businesses overnight. Today, an average business is data-centric, data-driven, and data-powered. Data is thought of as the new currency in the global economy. In this globally competitive business world, data in every form is traded and sold. For example, 360-degree customer data, global sales data, health care data, and insurance history data are all available with a few keystrokes. A modern Data Architecture is designed to “eliminate data silos, combining data from all corners of the company along with external data sources.”


One in every 13 incidents blamed on API insecurity – report

Lebin Cheng, vice president of API security at Imperva, commented: “The growing security risks associated with APIs correlate with the proliferation of APIs, combined with the lack of visibility that organizations have into these ecosystems. At the same time, since every API is unique, every incident will have a different attack pattern. A traditional approach to security where one simple patch addresses all vulnerabilities doesn’t work with APIs.” Cheng added: “The proliferation of APIs, combined with the lack of visibility into these ecosystems, creates opportunities for massive, and costly, data leakage.” ... By the same metric, professional services were also highly exposed to API-related problems (10-15%), while manufacturing, transportation, and utilities (all 4-6%) sit in the mid-range. Industries such as healthcare have less than 1% of security incidents attributable to API-related security problems. Many organizations are failing to protect their APIs because doing so requires equal participation from the security and development teams, which have historically been somewhat at odds.


What Are Deep Learning Embedded Systems And Its Benefits?

Deep learning is a hot topic in machine learning, with many companies looking to implement it in their products. Here are some benefits that deep learning embedded systems can offer: Increased Efficiency and Performance: Deep learning algorithms are incredibly efficient, meaning they can achieve high-performance levels even when running on small devices. This means that deep learning embedded systems can be used to improve the performance of existing devices and platforms or to create new devices that are powerful and efficient. Reduced Size and Weight: Deep learning algorithms are often very compact and can be implemented on small devices without sacrificing too much performance or capability. This reduces the device’s size and weight, making it more portable and easier to use. Greater Flexibility: Deep learning algorithms can often exploit complex data sets to improve performance. This means deep learning embedded systems can be configured to work with various data sets and applications, giving them greater flexibility and adaptability.


State-Backed Hackers Using Ransomware as a Decoy for Cyber Espionage Attacks

The activity cluster, attributed to a hacking group dubbed Bronze Starlight by Secureworks, involves the deployment of post-intrusion ransomware such as LockFile, Atom Silo, Rook, Night Sky, Pandora, and LockBit 2.0. "The ransomware could distract incident responders from identifying the threat actors' true intent and reduce the likelihood of attributing the malicious activity to a government-sponsored Chinese threat group," the researchers said in a new report. "In each case, the ransomware targets a small number of victims over a relatively brief period of time before it ceases operations, apparently permanently." Bronze Starlight, active since mid-2021, is also tracked by Microsoft under the emerging threat cluster moniker DEV-0401, with the tech giant emphasizing its involvement in all stages of the ransomware attack cycle right from initial access to the payload deployment. ... The key victims encompass pharmaceutical companies in Brazil and the U.S., a U.S.-based media organization with offices in China and Hong Kong, electronic component designers and manufacturers in Lithuania and Japan, a law firm in the U.S., and an aerospace and defense division of an Indian conglomerate.



Quote for the day:

"Leadership has a harder job to do than just choose sides. It must bring sides together." -- Jesse Jackson

Daily Tech Digest - June 25, 2022

What Are CI And CD In DevOps And How Do They Work?

The purpose of continuous delivery is to put a packaged artifact into production. With CD, the whole delivery process, including deployment, is automated. CD tasks may involve provisioning infrastructure, tracking changes (ticketing), deploying artifacts, verifying those changes, and ensuring that these changes do not go out if any problems arise. Some firms use certain parts of continuous delivery to support their operational duties. A good example is employing a CD pipeline to handle infrastructure deployment. Some organizations will leverage their CD pipelines to coordinate infrastructure setup and configuration using configuration management tools such as Ansible, Chef, or Puppet. A CI/CD pipeline may appear to be overhead, but it is not. It is essentially an executable definition of the procedures that any developer must take in order to deliver a new version of a software product. Without an automated pipeline, developers would have to complete these processes manually, which would be significantly less productive.
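The "executable definition of the procedures" idea can be sketched in a few lines. This is a toy illustration only, not tied to any particular CI product; the stage names and the no-op lambdas stand in for real build, test, provision, and deploy commands.

```python
# Minimal sketch of what a CI/CD pipeline automates: each stage is a step a
# developer would otherwise run by hand, executed in order, halting on failure.

def run_pipeline(stages):
    """Run stages in order; stop at the first failure, as a CI server would."""
    for name, step in stages:
        ok = step()
        print(f"{name}: {'ok' if ok else 'FAILED'}")
        if not ok:
            return False  # halt the pipeline so a broken build never deploys
    return True

# Illustrative continuous-delivery stages (placeholders for real commands).
stages = [
    ("build",     lambda: True),  # e.g. compile and package an artifact
    ("test",      lambda: True),  # e.g. run the unit and integration suites
    ("provision", lambda: True),  # e.g. infrastructure setup via Ansible/Chef/Puppet
    ("deploy",    lambda: True),  # e.g. push the artifact to production
]

success = run_pipeline(stages)
print("pipeline succeeded" if success else "pipeline stopped")
```

Because the stopping rule lives in the runner, a failing "test" stage automatically prevents "deploy" from ever running, which is exactly the guarantee a manual process cannot make.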


Why You Need to Be an Influencer Brand and the 3 Rs of Becoming One

Of course, brands creating content has been around for decades. Content marketing is creating and distributing valuable, relevant and consistent content to attract/retain an audience, driving profitable action. The difference is that influencer brands have shifted their entire orientation to a consumer-centric integrated marketing communications (IMC) mindset. Influencer brands go beyond blogs, infographics, eBooks, testimonials, and how-to guides that appeal to the head. They have learned to appeal to the heart of their audience. This comes from seeing the world from the target's perspective, a shift that can be seen in following the three Rs of influence to direct brand content creation. For example, the focus of Yeti Coolers' content and engagement isn't selling coolers. It is selling a lifestyle that the coolers help enable. They organize products so customers can shop by activity. Images and copy lead with stories of the adventures their audience can have with the gear — fishing, hunting, camping, by the coast, in the snow, on the ranch and in the rodeo arena.


3 certification tips for IT leaders looking to get ahead

If leveraged properly, certifications can also assist IT decision-makers in their key leadership responsibilities. For example, Puneesh Lamba, CIO of Shahi Exports, an apparel manufacturing company, acknowledges that “certifications have helped him perform better in board meetings, thereby making it easier to get approvals on IT spending.” “Typically, CIOs from large technology companies have strong IT skills but poor communications skills, while it’s just the opposite for CIOs in customer-facing B2C companies. These technology leaders need to get certified in areas that they lack. While CIOs push their team to get certified, they need to come out of their comfort zones and follow suit,” says Chandra. But the benefits of certifications won’t accrue automatically. IT leaders seeking to advance their skills and careers need to build a strategy aimed at squeezing the maximum value out of what certifications can offer. Here, four CIOs share their experiences in pursuing certifications and offer advice on how to make the most of these valuable career advancement tools as an IT leader.


Magnetic superstructures as a promising material for 6G technology

The race to realize sixth generation (6G) wireless communication systems requires the development of suitable magnetic materials. Scientists from Osaka Metropolitan University and their colleagues have detected an unprecedented collective resonance at high frequencies in a magnetic superstructure called a chiral spin soliton lattice (CSL), revealing CSL-hosting chiral helimagnets as a promising material for 6G technology. The study was published in Physical Review Letters. Future communication technologies require expanding the frequency band from the current few gigahertz (GHz) to over 100 GHz. Such high frequencies are not yet possible, given that existing magnetic materials used in communication equipment can only resonate and absorb microwaves up to approximately 70 GHz with a practical-strength magnetic field. Addressing this gap in knowledge and technology, the research team led by Professor Yoshihiko Togawa from Osaka Metropolitan University delved into the helicoidal spin superstructure CSL.


Don’t fall into the personal brand trap

While you can try to emulate the positive qualities of branding, the truth is that rulebook wasn’t designed with you in mind. Brands are static creations, while you must be a dynamic participant in your life and career. Brands let the consensus of others dictate their values and meaning, while you must discover both for yourself. Brands chase consistency by reorienting to match the expectations of “consumers,” while you must reserve room to grow and develop without a sense of self-fraudulence. Take the personal-branding prescription too far, and you run the risk of cementing your identity to the brand. New passions go unexplored. Fears and struggles must be ignored over concerns of not being “on brand.” And your life endeavors are filtered through the lens of marketability rather than the pursuit of their intrinsic worth. All of which can be counterproductive to your sense of authenticity. As one meta-analysis found, authenticity had a positive relationship with both well-being and engagement. But to achieve that, you must meet yourself as you are today, not who you were 10 years ago when you settled on your personal brand.


Is NextJS a Better Choice Than React in 2022?

If you know React, you kind of know NextJS, because Next is a React framework. You have components just like in React. CSS has a different naming convention, but that's the biggest change. The reason Next is so good is that it gives you options. If you want a page to have good SEO, you can use getServerSideProps. If you want to use CSR, you can use useEffect to call your APIs, like in React. Adding TypeScript to your Next project is also very simple. You even have a built-in router and don't have to use React Router. The option to choose between CSR, SSR, and SSG is what makes Next the best. You even get a free tier on Vercel for your Next project. Now that you're convinced that you should use Next.js, you might wonder how to change your existing website to Next. Next.js is designed for gradual adoption. Migrating from React to Next is pretty straightforward and can be done slowly by gradually adding more pages. You can configure your server so that everything under a specific subpath points to the Next.js app. If your site is abc.com, you can configure abc.com/about to serve a Next.js app. This has been explained really well in the Next.js docs.


How machine learning AI is going to change gaming forever

Obviously, machine learning techniques have broad implications for almost every sector of life, but how they will intersect with gaming has potentially some of the broadest implications for Microsoft as a business. One problem the video game industry generally faces right now pertains to the gap between expectations and investment. Video games are becoming increasingly complex to make, fund, and manage, as they explode in complexity and graphical fidelity. We've seen absolutely insane Unreal Engine demos that showcase near-photorealistic scenes and graphics, but the manual labor involved in producing a full game based on some of these principles is enormous in terms of both time and expense. What is typically thought of as "AI" in a gaming context generally hasn't been AI in the true sense of the word. Video game non-player characters (NPCs) and enemies generally operate on a rules-based model that often has to be manually crafted by a programmer. Machine learning models are importantly far more fluid, able to produce their own rules within parameters, and respond dynamically to new information on the fly.
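The contrast between a hand-crafted rules-based NPC and a learned policy can be made concrete. The sketch below is purely illustrative (the action names, thresholds, and weights are invented, not from the article): the rules-based agent is an if/else ladder a programmer must author branch by branch, while a learned policy maps the same observations to actions through fitted parameters, with the "rules" implicit in the weights.

```python
# Toy contrast: hand-crafted rules-based NPC behaviour versus the shape of a
# learned policy. All names, thresholds, and weights are illustrative only.

def rules_based_npc(distance_to_player, health):
    """Every branch here had to be written by a programmer by hand."""
    if health < 20:
        return "flee"
    if distance_to_player < 5:
        return "attack"
    if distance_to_player < 15:
        return "chase"
    return "patrol"

def learned_policy(observation, weights):
    """A learned policy instead scores each action from fitted parameters;
    there is no explicit if/else ladder to maintain."""
    scores = {action: sum(w * x for w, x in zip(ws, observation))
              for action, ws in weights.items()}
    return max(scores, key=scores.get)

print(rules_based_npc(distance_to_player=3, health=80))  # attack
print(rules_based_npc(distance_to_player=3, health=10))  # flee
```

Extending the rules-based agent means writing more branches; extending the learned one means retraining the weights on new data, which is why ML-driven NPCs can respond to situations a designer never anticipated.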


Reflections about low-code data management

As more people began using the Internet, better tools and resources became available. Today, the market is full of low-code Content Management Systems (CMS) and drag-and-drop website builders (WordPress, HubSpot, Shopify, Squarespace, etc.) that make it easy to create a professional-looking website without any coding knowledge. While there are still a handful of very specific use cases where you would need to code a website from scratch, organizations realized that using a low-code CMS or drag-and-drop builder was a much better option in the vast majority of cases. This shift has led to a dramatic decrease in the amount of time and effort required to build a website. In fact, you can now create an entire website in just a few hours using these low-code tools. With every great shift comes some level of resistance. At first, web developers were skeptical of (or outright opposed to) low-code tools for the following reasons: Fear of Replacement: Developers saw these tools as a threat to their jobs. Power & Flexibility: Developers were unconvinced that the tools would be powerful, flexible, or customizable enough to produce the same quality of work.


Inside the Metaverse: Architects See Opportunity in a Virtual World

“The metaverse is not an escape, and it's not a video game,” Patrik Schumacher, principal at Zaha Hadid Architects (ZHA), told RECORD. “It will become the immersive internet for corporations, for education, for retail, and also for socializing and networking in more casual arenas. Everything we are doing in the real world could potentially be substituted or augmented or paralleled with interactions in the metaverse.” ZHA was one of the first major firms to take the plunge into metaverse design. In early March, the firm announced that it would build an entire metaverse city—a digital version of the unrecognized, and as yet unbuilt, sovereign state “Liberland” that was founded seven years ago by the right-wing Czech politician Vít Jedlička. “At the time, I was very frustrated with planning regulations and overbearing political constraints on city development,” says Schumacher, who has long fought against government intervention in urban development.


5 social engineering assumptions that are wrong

Users may be more inclined to interact with content if it appears to originate from a source they recognize and trust, but threat actors regularly abuse legitimate services such as cloud storage providers and content distribution networks to host and distribute malware as well as credential harvesting portals, according to Proofpoint. “Threat actors may prefer distributing malware via legitimate services due to their likelihood of bypassing security protections in email compared to malicious documents. Mitigating threats hosted on legitimate services continues to be a difficult vector to defend against as it likely involves implementation of a robust detection stack or policy-based blocking of services which might be business relevant,” the report read. ... There’s a tendency to assume that social engineering attacks are limited to email, but Proofpoint detected an increase in attacks perpetrated by threat actors leveraging a robust ecosystem of call center-based email threats involving human interaction over the telephone. “The emails themselves don’t contain malicious links or attachments, and individuals must proactively call a fake customer service number in the email to engage with the threat actor. ...”



Quote for the day:

"The ability to stay calm and polite, even when people upset you, is a superpower." -- Vala Afshar

Daily Tech Digest - June 24, 2022

Toward data dignity: Let’s find the right rules and tools for curbing the power of Big Tech

Enlightened new policies and legislation, building on blueprints like the European Union’s GDPR and California’s CCPA, are a critical start to creating a more expansive and thoughtful formulation for privacy. Lawmakers and regulators need to consult systematically with technologists and policymakers who deeply understand the issues at stake and the contours of a sustainable working system. That was one of the motivations behind the creation of the Ethical Tech Project: to gather like-minded ethical technologists, academics, and business leaders to engage in that intentional dialogue with policymakers. We are starting to see elected officials propose regulatory bodies akin to what the Ethical Tech Project was designed to do—convene tech leaders to build standards protecting users against abuse. A recently proposed federal watchdog would be a step in the right direction to usher in proactive tech regulation and start a conversation between the government and the individuals who have the know-how to find and define the common-sense privacy solutions consumers need.


For HPC Cloud The Underlying Hardware Will Always Matter

For a large contingent of those ordinary enterprise cloud users, the belief is that a major benefit of the cloud is not thinking about the underlying infrastructure. But, in fact, understanding the underlying infrastructure is critical to unleashing the value and optimal performance of a cloud deployment. Even more so, HPC application owners need in-depth insight and therefore, a trusted hardware platform with co-design and portability built in from the ground up and solidified through long-running cloud provider partnerships. ... In other words, the standard lift-and-shift approach to cloud migration is not an option. The need for blazing fast performance with complex parallel codes means fine-tuning hardware and software. That’s critical for performance and for cost optimization, says Amy Leeland, director of hyperscale cloud software and solutions at Intel. “Software in the cloud isn’t always set by default to use Intel CPU extensions or embedded accelerators for optimal performance, even though it is so important to have the right software stack and optimizations to unlock the potential of a platform, even on a public cloud,” she explains.


NSA, CISA say: Don't block PowerShell, here's what to do instead

Defenders shouldn't disable PowerShell, a scripting language, because it is a useful command-line interface for Windows that can help with forensics, incident response and automating desktop tasks, according to joint advice from the US spy service the National Security Agency (NSA), the US Cybersecurity and Infrastructure Security Agency (CISA), and the New Zealand and UK national cybersecurity centres. ... So, what should defenders do? Remove PowerShell? Block it? Or just configure it? "Cybersecurity authorities from the United States, New Zealand, and the United Kingdom recommend proper configuration and monitoring of PowerShell, as opposed to removing or disabling PowerShell entirely," the agencies say. "This will provide benefits from the security capabilities PowerShell can enable while reducing the likelihood of malicious actors using it undetected after gaining access into victim networks." PowerShell's extensibility, and the fact that it ships with Windows 10 and 11, gives attackers a means to abuse the tool. 


How companies are prioritizing infosec and compliance

“This study confirmed our long-standing theory that when security and compliance have a unified strategy and vision, every department and employee within the organization benefits, as does the business customer,” said Christopher M. Steffen, managing research director of EMA. “Most organizations view compliance and compliance-related activities as ‘the cost of business,’ something they have to do to conduct operations in certain markets. Increasingly, forward-thinking organizations are looking for ways to maximize their competitive advantage in their markets, and having a best-in-class data privacy program or compliance program is something that more savvy customers are interested in, especially in organizations with a global reach. Compliance is no longer a ‘table stakes’ proposition: comprehensive compliance programs focused on data security and privacy can be the difference in very tight markets and are often a deciding factor for organizations choosing one vendor over another.”


IDC Perspective on Integration of Quantum Computing and HPC

Quantum and classical hardware vendors are working to develop quantum and quantum-inspired computing systems dedicated to solving HPC problems. For example, using a co-design approach, quantum start-up IQM is mapping quantum applications and algorithms directly to the quantum processor to develop an application-specific superconducting computer. The result is a quantum system optimized to run particular applications such as HPC workloads. In collaboration with Atos, quantum hardware start-up Pasqal is working to incorporate its neutral-atom quantum processors into HPC environments. NVIDIA’s cuQuantum Appliance and cuQuantum software development kit provide enterprises the quantum simulation hardware and developer tools needed to integrate and run quantum simulations in HPC environments. At a more global level, the European High Performance Computing Joint Undertaking (EuroHPC JU) announced its funding for the High-Performance Computer and Quantum Simulator (HPCQS) hybrid project.


Australian researchers develop a coherent quantum simulator

“What we’re doing is making the actual processor itself mimic the single carbon-carbon bonds and the double carbon-carbon bonds,” Simmons explains. “We literally engineered, with sub-nanometre precision, to try and mimic those bonds inside the silicon system. So that’s why it’s called a quantum analog simulator.” Using the atomic transistors in their machine, the researchers simulated the covalent bonds in polyacetylene. According to the SSH theory, there are two different scenarios in polyacetylene, called “topological states” – “topological” because of their different geometries. In one state, you can cut the chain at the single carbon-carbon bonds, so you have double bonds at the ends of the chain. In the other, you cut the double bonds, leaving single carbon-carbon bonds at the ends of the chain and isolating the two atoms on either end due to the longer distance in the single bonds. The two topological states show completely different behaviour when an electrical current is passed through the molecular chain. That’s the theory. “When we make the device,” Simmons says, “we see exactly that behaviour. So that’s super exciting.”


Is Kubernetes key to enabling edge workloads?

Lightweight and deployed in milliseconds, containers enable compatibility between different infrastructure environments and apps running across disparate platforms. Isolating edge workloads in containers protects them from cyber threats while microservices let developers update apps without worrying about platform-level dependencies. Benefits of orchestrating edge containers with Kubernetes include: Centralized Management — Users control the entire app deployment across on-prem, cloud, and edge environments through a single pane of glass. Accelerated Scalability — Automatic network rerouting and the capability to self-heal or replace existing nodes in case of failure remove the need for manual scaling. Simplified Deployment — Cloud-agnostic, DevOps-friendly, and deployable anywhere from VMs to bare metal environments, Kubernetes grants quick and reliable access to hybrid cloud computing. Resource Optimization — Kubernetes maximizes the use of available resources on bare metal and provides an abstraction layer on top of VMs, optimizing their deployment and use.


Canada Introduces Infrastructure and Data Privacy Bills

The bill sets up a clear legal framework and details expectations for critical infrastructure operators, says Sam Andrey, a director at think tank Cybersecure Policy Exchange at Toronto Metropolitan University. The act also creates a framework for businesses and government to exchange information on the vulnerabilities, risks and incidents, Andrey says, but it does not address some other key aspects of cybersecurity. The bill should offer "greater clarity" on the transparency and oversight into what he says are "fairly sweeping powers." These powers, he says, could perhaps be monitored by the National Security and Intelligence Review Agency, an independent government watchdog. It lacks provisions to protect "good faith" researchers. "We would urge the government to consider using this law to require government agencies and critical infrastructure operators to put in place coordinated vulnerability disclosure programs, through which security researchers can disclose vulnerabilities in good faith," Andrey says.


Prioritize people during cultural transformation in 3 steps

Addressing your employees’ overall well-being is also critical. Many workers who are actively looking for a new job say they’re doing so because their mental health and well-being has been negatively impacted in their current role. Increasingly, employees are placing greater value on their well-being than on their salary and job title. This isn’t a new issue, but it’s taken on a new urgency since COVID pushed millions of workers into the remote workplace. For example, a 2019 Buffer study found that 19 percent of remote workers reported feeling lonely working from home – not surprising, since most of us were forced to severely limit our social interactions outside of work as well. Leaders can help address this by taking actions as simple as introducing more one-to-one meetings, which can boost morale. One-on-one meetings are essential to promoting ongoing feedback. When teams worked together in an office, communication was more efficient mainly because employees and managers could meet and catch up organically throughout the day.


Pathways to a Strategic Intelligence Program

Strong data visualization capabilities can also be a huge boost to the effectiveness of a strategic intelligence program because they help executive leadership, including the board, quickly understand and evaluate risk information. “There’s an overwhelming amount of data out there and so it’s crucial to be able to separate the signal from the noise,” he says. “Good data visualization tools allow you to do that in a very efficient, impactful and cost-effective manner, and to communicate information to busy senior leaders in a way that is most useful for them.” Calagna agrees that data visualization tools play an important role in bringing a strategic intelligence to life for leaders across functions within any organization, helping them to understand complex scenarios and insights more easily than narrative and other report forms may permit. “By quickly turning high data volumes into complex analyses, data visualization tools can enable organizations to relay near real-time insights and intelligence that support better informed decision-making,” she says. Data visualization tools can help monitor trends and assumptions that impact strategic plans and market forces and shifts that will inform strategic choices.



Quote for the day:

"Patience puts a crown on the head." -- Ugandan Proverb

Daily Tech Digest - June 23, 2022

Microsoft’s framework for building AI systems responsibly

AI systems are the product of many different decisions made by those who develop and deploy them. From system purpose to how people interact with AI systems, we need to proactively guide these decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The Responsible AI Standard sets out our best thinking on how we will build AI systems to uphold these values and earn society’s trust. It provides specific, actionable guidance for our teams that goes beyond the high-level principles that have dominated the AI landscape to date. The Standard details concrete goals or outcomes that teams developing AI systems must strive to secure. These goals help break down a broad principle like ‘accountability’ into its key enablers, such as impact assessments, data governance, and human oversight. Each goal is then composed of a set of requirements, which are steps that teams must take to ensure that AI systems meet the goals throughout the system lifecycle.


Success Demands Sacrifice. What Are You Willing to Give Up?

The key is to preplan your sacrifices rather than sacrifice parts of your life by default. Look at your normal schedule and think about where you could find the extra time and energy for your business, without sacrificing the things you value most in life. Maybe you decide to stay up later after the kids are in bed to get work done. Maybe you stop binge-watching on Hulu so you can get to the gym. Maybe you give up that second round of golf each week to spend more time with your spouse. Maybe you leave the office for a couple of hours to catch your kid's soccer game and come back later. Maybe you sacrifice some money to get extra help in for the business. Maybe you stop micro-managing everything in your business and actually delegate more responsibility to others. We all have areas where we spend our time that we can tweak. You just have to decide what's right for you. You'll always have to sacrifice something to build a business or accomplish anything extraordinary in life. But giving up what you value most is not a good trade-off. Make sure you're making smart sacrifices by giving up what doesn't matter for things that do.


Microsoft to retire controversial facial recognition tool that claims to identify emotion

The decision is part of a larger overhaul of Microsoft’s AI ethics policies. The company’s updated Responsible AI Standards (first outlined in 2019) emphasize accountability to find out who uses its services and greater human oversight into where these tools are applied. In practical terms, this means Microsoft will limit access to some features of its facial recognition services (known as Azure Face) and remove others entirely. Users will have to apply to use Azure Face for facial identification, for example, telling Microsoft exactly how and where they’ll be deploying its systems. Some use cases with less harmful potential (like automatically blurring faces in images and videos) will remain open-access. ... “Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of ‘emotions,’ the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability,” wrote Microsoft’s chief responsible AI officer.


The Unreasonable Effectiveness of Zero Shot Learning

OpenAI also has something for that. They have OpenAI CLIP, which stands for Contrastive Language-Image Pre-training. What this model does is bring together text and image embeddings. It generates an embedding for each text and an embedding for each image, and these inputs are aligned to each other. The way this model was trained is that, for example, you have a set of images, like an image of a cute puppy. Then you have a set of texts like "Pepper the Aussie Pup." The model is trained so that the distance between the embedding of this picture of the puppy and the embedding of the text "Pepper the Aussie Pup" is really small, so the two are close to each other. It's trained on 400 million image-text pairs, which were scraped from the internet. You can imagine that someone did indeed put an image of a puppy on the internet and did write under it, "This is Pepper the Aussie Pup."
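The zero-shot classification idea that falls out of this training can be sketched in a few lines: pick the caption whose embedding sits closest to the image embedding. The vectors below are made up for illustration; a real system would obtain them from CLIP's trained image and text encoders rather than hard-coding them.

```python
import math

# Toy zero-shot classification in the CLIP style: compare one image embedding
# against several candidate caption embeddings and pick the closest match.
# All embedding values here are invented placeholders.

def cosine(u, v):
    """Cosine similarity between two vectors; higher means more aligned."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

image_embedding = [0.9, 0.1, 0.0]           # pretend encoding of a puppy photo

caption_embeddings = {
    "a photo of a puppy": [1.0, 0.0, 0.1],  # trained to land near puppy images
    "a photo of a cat":   [0.0, 1.0, 0.1],
    "a photo of a plane": [0.0, 0.1, 1.0],
}

best = max(caption_embeddings,
           key=lambda c: cosine(image_embedding, caption_embeddings[c]))
print(best)  # "a photo of a puppy"
```

Because the candidate captions are just text, you can classify against labels the model never saw grouped as a task during training, which is what makes the approach "zero shot."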


Quantum Advantage in Learning from Experiments

Quantum computers will likely offer exponential improvements over classical systems for certain problems, but to realize their potential, researchers first need to scale up the number of qubits and to improve quantum error correction. What’s more, the exponential speed-up over classical algorithms promised by quantum computers relies on a big, unproven assumption about so-called “complexity classes” of problems — namely, that the class of problems that can be solved on a quantum computer is larger than those that can be solved on a classical computer. It seems like a reasonable assumption, and yet, no one has proven it. Until it's proven, every claim of quantum advantage will come with an asterisk: that it can do better than any known classical algorithm. Quantum sensors, on the other hand, are already being used for some high-precision measurements and offer modest (and proven) advantages over classical sensors. Some quantum sensors work by exploiting quantum correlations between particles to extract more information about a system than they otherwise could.


How AI is changing IoT

The cloud can’t scale proportionately to handle all the data that comes from IoT devices, and transporting data from the IoT devices to the cloud is bandwidth-limited. No matter the size and sophistication of the communications network, the sheer volume of data collected by IoT devices leads to latency and congestion. Several IoT applications, such as autonomous cars, rely on rapid, real-time decision-making. To be effective and safe, autonomous cars need to process data and make instantaneous decisions (just like a human being). They can’t be limited by latency, unreliable connectivity, and low bandwidth. Autonomous cars are far from the only IoT applications that rely on this rapid decision-making. Manufacturing already incorporates IoT devices, and delays or latency could impact processes or limit capabilities in the event of an emergency. In security, biometrics are often used to restrict or allow access to specific areas. Without rapid data processing, there could be delays that impact speed and performance, not to mention the risks in emergency situations.


A Huge Step Forward in Quantum Computing Was Just Announced: The First-Ever Quantum Circuit

The landmark discovery, published in Nature today, was nine years in the making. "This is the most exciting discovery of my career," senior author and quantum physicist Michelle Simmons, founder of Silicon Quantum Computing and director of the Centre of Excellence for Quantum Computation and Communication Technology at UNSW, told ScienceAlert. Not only did Simmons and her team create what's essentially a functional quantum processor, they also successfully tested it by modeling a small molecule in which each atom has multiple quantum states – something a traditional computer would struggle to achieve. This suggests we're now a step closer to finally using quantum processing power to understand more about the world around us, even at the tiniest scale. "In the 1950s, Richard Feynman said we're never going to understand how the world works – how nature works – unless we can actually start to make it at the same scale," Simmons told ScienceAlert. "If we can start to understand materials at that level, we can design things that have never been made before."


How to Handle Third-Party Cyber Incident Response

With tier-1 support, you have someone watching the stuff that is running. Their setup alerts them to the fact that something bad happened. They're going to turn to a tier-2 person and say, “Hey, can you check this out and see if it really is something bad?” And so the tier-2 person takes a look. Maybe they'll take a look at that laptop or that part of the network or a server. If it wasn't a false alert, and it looks like bad behavior, then it goes to tier 3. Typically, the person running that is much more detailed and technical. They'll do a forensic analysis. And they look at all of the bits that are moving: the communication and what happened. They know adversary tactics, techniques, and procedures (TTPs). They’re really good at tracking the adversary in the environment. When you're looking for a third-party incident response and support agreement, you have to know what you, as a company, have the skills to do. Then you contract out for tier 2 or tier 3. They're going to come in and provide support. Service level agreements are critical. What are you expecting? The more you want, the more you're going to pay. 
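The escalation flow described above — tier 1 raises an alert, tier 2 validates it, tier 3 does forensics — can be sketched as a simple routing function. Everything here (function names, the alert fields, the escalation criteria) is hypothetical and for illustration only, not a real SOC product API:

```python
# Illustrative sketch of a tiered alert-triage flow.
# All names and decision criteria are hypothetical.

def triage(alert, is_false_positive, needs_forensics):
    """Route an alert through tier-1 monitoring, tier-2 validation,
    and tier-3 forensic analysis; return the handling path taken."""
    path = ["tier1: alert raised"]             # tier 1 watches and escalates
    path.append("tier2: validating alert")     # tier 2 checks the host/network
    if is_false_positive(alert):
        path.append("tier2: closed as false positive")
        return path
    path.append("tier2: confirmed malicious")
    if needs_forensics(alert):
        # Tier 3 may be the contracted third party, per the SLA.
        path.append("tier3: forensic analysis and adversary TTP tracking")
    return path

# Example: a confirmed alert that warrants deep analysis.
result = triage(
    {"host": "laptop-42", "signal": "beaconing"},
    is_false_positive=lambda a: False,
    needs_forensics=lambda a: a["signal"] == "beaconing",
)
```

The design point the speaker makes maps directly onto the callbacks: whichever tiers you lack in-house skills for are the ones you contract out, and the service level agreement governs how quickly that contracted tier picks up the escalation.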


IT leadership: 3 ways CIOs prevent burnout

“Prioritize yourself. It is not selfish; it’s an act of self-care. Set aside an ‘hour of power’ every day, first thing in the morning. During this hour, go analog and keep all digital distractions away. Protect that time fiercely and find an activity that nourishes your mind. For instance, learn something new and exciting, read some non-fiction that is energizing and inspiring, journal, or meditate. Find what works for you and do it every day. “Get moving. A healthy mind needs a healthy body. Do something, anything, to get some physical activity into your day. If dancing to disco is your thing, turn up the volume and go for it. Posting it on TikTok is optional, and maybe not advisable. “Stay connected. You are not alone – no matter what you’re going through, someone else has experienced it. Showing vulnerability is not a weakness, it is a strength. Build and nurture a close group of trusted advisors, preferably outside your company. Build relationships before you need them. Don’t be afraid to ask for help. They can help you work through challenges and provide an avenue to help others on this journey.”


Zscaler Posture Control Correlates, Prioritizes Cloud Risks

Zscaler Posture Control wants to make it easier for developers to take a hands-on approach to keeping their companies safe and to incorporate security best practices during the development stage, according to Chaudhry. He says Zscaler hopes that 10% of its more than 5,600 customers will be using the company's entire cloud workload protection offering within the next year. "Doing patch management after the application is built is extremely hard," Chaudhry says. "It was important for us to make sure that the developers are taking a more active role in their part of the security implementation." Zscaler wants to learn from the 210 billion transactions it processes daily to better remediate risk on an ongoing basis, addressing everything from unpatched vulnerabilities and overprivileged entitlements to Amazon S3 buckets that have erroneously been left open, Chaudhry says. Zscaler will put data points from these transactions into its artificial intelligence model to better protect customers going forward.



Quote for the day:

"Leadership is the creation of an environment in which others are able to self-actualize in the process of completing the job." -- John Mellecker