Daily Tech Digest - February 06, 2023

Preparing for Compliance With AI, Data Privacy Laws

Even though enforcement of data privacy laws in California and New York has been slightly delayed, and California regulations implementing the new AI law are not yet fully baked, businesses should be engaging expert consultants now to be ready when enforcement begins. Platz notes that in the working world -- and especially in an environment that is often largely remote, with employees around the country and the world -- these new privacy laws will affect employees beyond the states that enacted them if those employees live and work in different locations. “With flexibility to work from virtually anywhere, this legislation will have wide reaching impact across states and sectors and will only highlight the need for employers to look closely at their path to compliance across a significant amount of data,” Platz says. ... “As almost always happens, many other jurisdictions will follow suit, as New York City already has,” he says. “So, businesses should be preparing to deal not just with these two new laws but, ultimately, with similar ones in most or all states and perhaps other cities.”


While governments pass privacy laws, companies struggle to change

No single approach can ward off all dangers—it takes a potent combination of technologies, policies, and practices, all with boardroom support. Remember, employees often represent the weakest link in the data security chain since a simple phishing email can bypass the most sophisticated defenses. Strong protection starts with practical training and enforcement. Management can also help ensure every strategy builds on a solid foundation. Many enterprises are now engaged in major digital transformation and cloud migration initiatives. However, some still need help answering basic questions: Do we know where every piece of data in the house resides? Do we know how much of it contains PII, and who has access to it? How is the data managed in the cloud? What kind of encryption has been applied? Where are the encryption keys stored, and who has access to those? ... This way, there are no shared network resources, and the enhanced security is matched with greater flexibility to ensure a company-specific deployment—a dedicated cloud tenant and custom software to address specific needs.
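Answering the inventory questions above usually starts with automated discovery of where personal data lives. As a minimal, illustrative sketch of pattern-based PII detection (the two patterns below are simplified examples; real scanners cover many more data types and scan entire data stores, not single strings):

```python
import re

# Simplified example patterns for two common PII types; real discovery
# tools handle many more (names, addresses, card numbers, etc.).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_pii(text: str) -> dict:
    """Report which PII types appear in a blob of text."""
    return {kind: bool(p.search(text)) for kind, p in PII_PATTERNS.items()}

record = "Contact jane.doe@example.com, SSN 123-45-6789"
print(find_pii(record))  # e.g. {'email': True, 'ssn': True}
```

A scan like this only answers the "where is the PII?" question; the access, encryption, and key-custody questions require auditing IAM policies and key-management configuration as well.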


Is the Answer to Your Data Science Needs Inside Your IT Team?

Allowing data scientists and developers to work together in real time provides multiple benefits. First, it allows for more expeditious and agile development of intelligent apps. Second, it allows developers and data scientists to learn about each other’s needs and processes. When each group is so closely connected and understands each other, it improves the chances of project success. Agile application development requires everyone to work in sync. When Red Hat began exploring ways to bridge the gap that has traditionally existed between developers and data scientists, we expanded on the idea of creating a common platform for real-time collaboration between them. Within this common platform, development and data science teams would have access to all the tools they need to perform their tasks, and could quickly build and share production pipelines. ... Open Data Hub was so effective at solving our internal data science and development challenges that we ultimately evolved it into a commercial offering called Red Hat OpenShift Data Science. 


20 Ways to Achieve Street Smart Wisdom for Leaders and Entrepreneurs

Adaptive thinking highlights the necessity of cultivating an open mindset and being able to adjust swiftly to changing circumstances and obstacles. To succeed, leaders need to be able to think quickly on their feet and modify their plans as necessary. Adaptive thinking also focuses on maintaining persistence and focus in the face of difficulty. Creative problem-solving emphasizes the need to think outside the paradigm and come up with unique solutions to challenging problems. To create novel solutions, leaders need to be able to spot trends and think creatively; it also underlines how important it is to stay abreast of recent trends and advancements. Lastly, strategic planning emphasizes the need for a well-thought-out strategy and the capacity to picture the desired outcome. Leaders must be able to foresee possible difficulties and be ready to modify their plans as necessary. This highlights the need to stay organized and to concentrate on long-term objectives.


The Case for a Strong Data Governance Program in 2023

Effective data governance is also critical for complying with data-focused regulations, especially data privacy laws. Following in the steps of the EU’s General Data Protection Regulation, several U.S. states have introduced privacy laws, with more states poised to do the same. Existing regulations include California’s Privacy Rights Act and Consumer Privacy Act, along with similar regulations in Colorado, Connecticut, Utah, and Virginia. In addition, because many organizations today anticipate incorporating artificial intelligence into decision making, they must make efforts to comply with emerging AI regulations. The standard-bearer is the EU’s AI Act, which aims to prevent potential data misuse and privacy violations. Acts like these depend on organizations adopting strong data governance practices. Clearly, every company today must have a data governance program. Lack of one can cause data inconsistencies, complicate data integration efforts, and create data integrity challenges. These issues can lead to a slew of negative outcomes: reputational damage, fines for noncompliance, reduced efficiency, and, of course, missed opportunities for business growth.


Government plans to catch tax fraudsters with help of AI

Cabinet Office minister Baroness Neville-Rolfe said fraud against “the public purse is unacceptable and we’re stepping up the fight against those who wish to profit off the backs of taxpayers”. “Through the use of cutting-edge technology, the PSFA will use data and AI to help us in the fight against fraudsters,” she added. The government previously signed another deal with Quantexa, in October 2021, to help combat Covid-19 loan scheme fraud. During the pandemic, fraudsters abused the government’s loan scheme, with a number of businesses making fraudulent claims. The contract with Quantexa was part of the government’s response to those criminal activities. As part of the contract, the government used Quantexa’s Contextual Decision Intelligence (CDI) platform, which enables customers to “create a connected view of [their] data to reveal relationships between people, places and organisations”. It analysed an initial set of 250 networks of people, organisations and places, processing more than 100 million data items.


Insurance IT leaders herald new era for digital customer experience

With new platforms evolving, insurance CIOs are eyeing new possibilities for the future. Liberty Mutual, which has been an industry leader in digital transformation, operates a hybrid cloud infrastructure built primarily on Amazon Web Services but with specific uses of Microsoft Azure and, to a lesser extent, Google Cloud Platform. ... The insurance company under his direction spent 17 years developing a robust platform that today gives consumers an automated claims system using chatbots, cameras, and e-mail to initiate a claim and rent a car. Meanwhile, a machine learning model analyzes the photograph of the damaged vehicle to detect, for instance, whether its airbag has been deployed, and to determine immediately whether the vehicle is totaled or the damage is limited to a fender bender. That’s today. Tomorrow, the platform will enable data scientists to build the next generation of applications for its consumers. “We’re really trying to understand the metaverse and what it might mean for us,” said McGlennon.


Lambda Throttling - How to Avoid It?

When your Lambda is throttled because you have reached the maximum concurrent execution limit, Lambda returns a throttling error. For asynchronous invocations, Lambda has a retry mechanism with exponential backoff: delays start at 1 second and are capped at 5-minute intervals, and by default Lambda can keep retrying a failed event for up to 6 hours. For better error-proofing, you can also use a dead-letter queue (DLQ), which other queues can target for messages that can’t be processed or consumed successfully; a DLQ catches events that still fail after all retries, but that is just for reference, and we will not dive into it now. The implications are significant: it doesn’t matter whether you send a message with SQS, EventBridge, or another asynchronous service; you will practically never need to think about handling throttling issues yourself. ... However, in contrast to synchronous invocation, this will not impact your application or service-level agreement (SLA), as the events are kept in the internal Lambda service queue and handled once resources have freed up to manage them. Every single one of them.
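A client-side version of this retry pattern can be sketched in a few lines. Note that `ThrottledError` and the generic `invoke` callable are placeholders, and the delays merely mirror the shape of Lambda's behavior (1-second base, 5-minute cap) rather than reproducing its internal queueing:

```python
import random
import time

class ThrottledError(Exception):
    """Stand-in for a throttling error such as TooManyRequestsException."""

def call_with_backoff(invoke, max_attempts=5, base=1.0, cap=300.0):
    """Retry a throttled call with capped exponential backoff and full jitter.

    Delays grow from roughly `base` seconds, doubling each attempt,
    and are capped at `cap` seconds (5 minutes).
    """
    for attempt in range(max_attempts):
        try:
            return invoke()
        except ThrottledError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error (or route the event to a DLQ)
            delay = min(cap, base * 2 ** attempt)
            time.sleep(delay * random.random())  # full jitter spreads retries apart
```

The jitter is the same trick Lambda and the AWS SDKs use to avoid retry storms: without it, all throttled callers would retry in lockstep and collide again.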


Will your incident response team fight or freeze when a cyberattack hits?

CISOs shouldn’t be surprised to hear that even well-prepared teams can have moments of paralysis; it’s just human nature, McKeown says. She says sometimes responders may experience cognitive narrowing, where they’re so focused on the situation directly in front of them that they can’t consider the full circumstances—an experience that can stop responders from thinking as they normally would. Niel Harper, an enterprise cybersecurity leader who serves as a board director with the governance association ISACA, witnessed a team freeze in response to a ransomware attack on his first day working with a company as an advisor. “They literally did not know what to do, even though they had some experience with [incident response] walkthroughs,” he recalls. “They were in panic mode.” Harper says he has seen other situations where the response was stymied and thus delayed. In some cases, teams were afraid that they’d be seen as overreacting. In others, they were paralyzed with the fear of being blamed. 


Why 2023 is the time to consider security automation

Security automation done right doesn’t usually mean replacing human intelligence and ability – rather, it aims to give people the requisite power to strengthen the organization’s security posture and mitigate threats. Security automation doesn’t necessarily have to be exotic. Especially if you’re just starting out, some of the simplest automation can have considerable impacts. ... “Over the last several years, engineering teams have automated nearly all of their development and deployment processes across APIs in CI/CD pipelines and unfortunately, security has oftentimes been an afterthought,” says Paul Nguyen, co-founder and co-CEO of Permiso. “Accordingly, attackers have leveraged stolen API keys and compromised service tokens as methods to infiltrate a network or service and move laterally.” The course correction isn’t to dump DevOps and CI/CD pipelines, obviously – it’s to better secure them, and automation is key. So is DevSecOps. “It’s time for security teams to embrace automation and bolster their defenses in order to be able to respond to the modern tactics of bad actors,” Nguyen says.



Quote for the day:

"You can't delegate accountability" -- Gordon Tredgold

Daily Tech Digest - February 05, 2023

Cloud security top risk to enterprises in 2023, says study

Indeed, about two-thirds of UK respondents told PwC they had not yet fully mitigated the risks associated with digital transformation, in spite of the potential cost, and reputational damage, of an incident – 27% of global chief financial officers who took part said they had experienced an incident in the past three years that had cost over $1m. On a brighter note, there does seem to be plenty of money available to help, which runs contrary to forecasts from analysts at Forrester, who predicted a 3.6% decline in general IT spending this year as organisations face a budget shortfall. Cyber security seems relatively unaffected by PwC’s metrics, with 59% of UK respondents saying they expect their security budgets to increase. ... At just under half of UK organisations, a “catastrophic” cyber incident was held to be the top risk scenario they faced, ahead of both a global recession and the resurgence of a new Covid-19 variant. PwC said this echoed the findings of a previous study of CEOs that found 64% of UK leaders were “extremely or very concerned” about cyber attacks hitting their ability to conduct business.


Projecting Cybersecurity in 2023

While cloud-based data storage can be equipped with cybersecurity measures to prevent data breaches, if an enterprise hosts a large amount of valuable customer data, even a partial breach can have far-reaching negative effects. This is because an organization’s cloud storage contains enormous hoards of extraordinarily valuable data; if an attacker gains access to merely a fraction of it, they can cause significant damage. An example of this was the Revolut data breach in September 2022. ... Though remote work is nothing new, it will continue to be a security concern in the coming year. Hackers will become more innovative in their approaches to targeting remote workers. Enterprises are also struggling to ensure privacy as their teams become more geographically scattered. Remote employment frequently results in an increase in ransomware, phishing and social engineering attacks. To address attacks related to remote workplaces, organizations must adopt zero-trust policies, assuming that every device and user is a possible attacker. Zero trust is a relatively new practice, but it is gaining traction as one of the key points of


Enterprises turn to single-vendor SASE for ease of manageability

"There’s a significant market opportunity to bring traditionally enterprise-grade security services to the midmarket and to small and medium-sized business," he said. "For many smaller companies, SASE is an opportunity for an all-in-one security and networking solution that allows them to offer more advanced security without the complexity or price tag of standalone solutions." Gartner has also been seeing growing interest from clients for single-vendor SASE platforms, said analyst Andrew Lerner, who covers enterprise networking for the research firm. Small companies without separate security and networking teams are particularly interested in single-vendor solutions, as are companies large enough to have architecture teams. "Architecture teams sit above the day-to-day operations," Lerner said. As a result, they can see the challenges associated with using multiple vendors. "Those challenges include multiple points of integration, multiple policies, multiple management planes, multiple points of presence," Lerner said. "That all has to be tied together, and that creates administrative inefficiency and inefficient traffic flows."


Google is feeling the ChatGPT threat, and here's its response

The company has reportedly been scrambling to put together a redesigned Search home page that includes multiple sections for back-and-forth questions between the user and a Google-made chatbot like ChatGPT, combined with traditional search results. Google now appears ready to show off what it's been working on, though it remains to be seen whether it's "Apprentice Bard", the chatbot it has reportedly been testing internally that uses Google's own LaMDA conversational technology. According to The Verge, Google has also sent media invites to an event on Wednesday, February 8 where it will explain how it's "using the power of AI to reimagine how people search for, explore and interact with information, making it more natural and intuitive than ever before to find what you need." The event will be streamed on YouTube at 8:30am ET. The increased openness appears to reflect an effort at Google to remind the world that it has been at the forefront of AI research for the past decade and remains relevant as questions mount about ChatGPT's impact on Google's Search business. That's as Microsoft suddenly seems to have a wider opening beyond the enterprise via its large stake in OpenAI.


iSIMs imminent? What the evolution of SIM cards means for enterprise IoT

As more businesses and industries around the world begin to commit to deploying massive IoT solutions, we will see a gradual growth in global iSIM adoption to support it. Another piece of the IoT puzzle is private 5G networks, which are also making big strides towards mass deployment. Private 5G is going to be crucial in supporting the connectivity demands of mMTC applications, delivering the “smart factories” and “smart airports” that have been talked about for some time. iSIMs will make it easier and more cost-effective for businesses to make this happen, meaning industry 4.0 is finally on the horizon. However, there is a drawback with iSIMs that businesses and device manufacturers will have to navigate. Because the SIM is directly built into the device, it means product development timelines are likely to be longer. Rather than the fairly “plug and play” nature of a SIM or eSIM, iSIMs will have to be progressively integrated into the IoT solutions. With that in mind, when can we expect iSIMs to really claim the SIM throne? While it’s likely that iSIMs will be deployed in the wild by 2024, we may have to wait a little while longer before we reach mass adoption.


Microsoft’s new Teams Premium tier integrates with OpenAI's GPT-3.5

GPT-3.5 will be used to divide Teams meeting recordings into chaptered sections, generate titles and section descriptions, and add personalized timeline markers that show when a user joined or left a meeting, as well as highlighting when a name was mentioned and when a screen was shared. Microsoft has long been a supporter of OpenAI, investing $1 billion in the company in 2019 to support its quest to create “artificial general intelligence,” and in 2020, it became the first company to license GPT for inclusion in its own products and services. GPT, which stands for Generative Pre-trained Transformer, is a language model developed by OpenAI that uses deep learning techniques for natural language processing (NLP) to generate text that is remarkably similar to human writing. GPT-3.5 is the latest version of the model. In January, Microsoft announced the third phase of its long-term partnership with OpenAI, with a multiyear, multibillion-dollar investment from the tech giant meant to help accelerate breakthroughs in AI, and the ability for Microsoft to access new AI-based capabilities it can resell or build into its products.


Tech workers seek alternative employment to avoid redundancy

With a large number of young people leaving the technology sector for various reasons, and the phrases “the great resignation” and “quiet quitting” gaining traction over the past year, organisations need to focus on ways to draw in new talent and keep the talent they already have. Until recently, a lack of skilled workers, increased use of technology and desperate employers put the power in the hands of jobseekers. But this is changing, with some suggesting the favour will shift towards employers this year. The recession has already seen high-profile tech companies such as Meta, Twitter, Microsoft and Amazon cut jobs in the thousands. When looking at redundancy concerns, CWJobs also looked at data from the Office for National Statistics, which suggests only 1.2% of firms in the “information and communications” sector are planning to let people go over the next three months – less than the average across the UK. Whether a looming threat or just rumours, the likelihood of employees having a “plan B” varies depending on location and age. Some 63% of respondents in London said they were applying for new jobs to protect their future, which is higher than the average.


Companies face data privacy maze, skills gap

“While businesses have invested significant resources into updating privacy protocols and notices to meet the Jan. 1, 2023 effective date for California and Virginia, there is still more work to be done to ensure covered businesses are ready for 2023 privacy compliance obligations,” the alert said. Forty-two percent of the ISACA respondents said their enterprise privacy budget is “somewhat or significantly” underfunded, down from 45% in 2022 and 49% in 2021. The association, which is made up of more than 165,000 professionals who work in IT-related fields, sent survey invitations during the fourth quarter of last year to about 46,000 of its constituents — mainly data privacy and security practitioners. A total of 1,890 respondents completed the survey. While many corporate executives are thinking about the potential fallout from data breaches — which are often in the headlines — there are still significant gaps to fill when it comes to broader data privacy obligations that are rapidly coming into force, according to Kazi. “It is possible to have good security in place but not be doing privacy very well,” she said.


Networking tips for IT leaders: A guide to building connections

Most experts agree you’ll get much more out of an in-person outing. But if budget or time are tight, online conferences can work, Mattson says. If you do opt for a webinar, make sure your camera is on, and comment when you can. “When you participate, people look at you as a go-to person, and that’s how you want to be seen,” Mattson says. “If you’re on mute and don’t look at the camera, that defeats the purpose.” And make sure to take advantage of any online networking opportunities the conference organizers provide. The pandemic has been a boon for online conferences. Megan Duty, vice president of technology and project delivery at Puritan Life, says her time available for networking increased because she was working at home more. “I wasn’t commuting as much and felt these conferences were important,” she says. ... Generally, Duty attends meetings that are relevant to insurance, leadership, women in technology, or those hosted by consulting groups she wants to get to know better. A lot of these forums are back in person, she says, and she traveled a lot during 2022. 


APT groups use ransomware TTPs as cover for intelligence gathering and sabotage

"Many of the observed TTPs and collected tools have previously been attributed by other researchers to the Kimsuky or Lazarus groups," the WithSecure researchers said in their new report. "The fact that references to both groups are observed could highlight the sharing of tooling and capabilities between North Korean threat actors." In this incident, WithSecure observed malware similar to GREASE, which had previously been attributed to Kimsuky. Another recovered malware was a custom version of Dtrack, a remote access Trojan (RAT), with a configuration very similar to one used by Lazarus in an attack against the Indian Kudankulam Nuclear Power Plant in 2019. The researchers also found usage of PuTTY Plink and 3Proxy, two tools previously observed in other Lazarus campaigns. The overlap with BianLian ransomware was the use of a command-and-control server hosted at an IP address previously used by BianLian attackers.



Quote for the day:

"Any one can hold the helm when the sea is calm." -- Publilius Syrus

Daily Tech Digest - February 01, 2023

Top 6 roadblocks derailing data-driven projects

Making the challenge of getting sufficient funding for data projects even more daunting is the fact that they can be expensive endeavors. Data-driven projects require a substantial investment of resources and budget from inception, Clifton says. “They are generally long-term projects that can’t be applied as a quick fix to address urgent priorities,” Clifton says. “Many decision makers don’t fully understand how they work or deliver for the business. The complex nature of gathering data to use it efficiently to deliver clear [return on investment] is often intimidating to businesses because one mistake can exponentially drive costs.” When done correctly, however, these projects can streamline and save the organization time and money over the long haul, Clifton says. “That’s why it is essential to have a clear strategy for maximizing data and then ensuring that key stakeholders understand the plan and execution,” he says. In addition to investing in the tools needed to support data-driven projects, organizations need to recruit and retain professionals such as data scientists. 


IoT, connected devices biggest contributors to expanding application attack surface

Along with IoT and connected device growth, rapid cloud adoption, accelerated digital transformation, and new hybrid working models have also significantly expanded the attack surface, the report noted.  ... Inefficient visibility and contextualization of application security risks leave organizations in “security limbo” because they don’t know what to focus on and prioritize, 58% of respondents said. “IT teams are being bombarded with security alerts from across the application stack, but they simply can’t cut through the data noise,” the report read. “It’s almost impossible to understand the risk level of security issues in order to prioritize remediation based on business impact. As a result, technologists are feeling overwhelmed by new security vulnerabilities and threats.” Lack of collaboration and understanding between IT operations teams and security teams is having several negative effects too, the report found, including increased vulnerability to security threats and blind spots, difficulties balancing speed, performance and security priorities, and slow reaction times when addressing security incidents.


Firmware Flaws Could Spell 'Lights Out' for Servers

Five vulnerabilities in the baseboard management controller (BMC) firmware used in servers of 15 major vendors could give attackers the ability to remotely compromise the systems widely used in data centers and for cloud services. The vulnerabilities, two of which were disclosed this week by hardware security firm Eclypsium, occur in system-on-chip (SoC) computing platforms that use AMI's MegaRAC Baseboard Management Controller (BMC) software for remote management. The flaws could impact servers produced by at least 15 vendors, including AMD, Asus, ARM, Dell, EMC, Hewlett-Packard Enterprise, Huawei, Lenovo, and Nvidia. Eclypsium disclosed three of the vulnerabilities in December, but withheld information on two additional flaws until this week in order to allow AMI more time to mitigate the issues. Since the vulnerabilities can only be exploited if the servers are connected directly to the Internet, the extent of the vulnerabilities is hard to measure, says Nate Warfield, director of threat research and intelligence at Eclypsium. 


As the anti-money laundering perimeter expands, who needs to be compliant, and how?

Remember: It’s not just existing criminals you’re looking for, but also people who could become part of a money laundering scheme. One very specific category is politically exposed persons (PEP), which refers to government workers or high-ranking officials at risk of bribery or corruption. Another category is people on sanctions lists, like the Specially Designated Nationals (SDN) list compiled by the Office of Foreign Assets Control (OFAC). These lists contain individuals and groups with links to high-risk countries. Extra vigilance is also necessary when dealing with money service businesses (MSB), as they’re more likely to become targets for money launderers. The point of all this is that a good AML program must include a thorough screening system that can detect high-risk customers before bringing them onboard. It’s great if you can stop criminals from accessing your system at all, but sometimes they slip through or influence existing customers. That’s why checking users’ backgrounds for red flags isn’t enough. You need to keep an eye on their current activity, too.
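The screening step described above can be sketched as a simple fuzzy name match. The watchlist entries and the similarity threshold below are made up for illustration; production systems screen against the full OFAC SDN data, PEP databases, and far more robust matching (aliases, transliterations, dates of birth):

```python
from difflib import SequenceMatcher

# Tiny illustrative watchlist; real AML programs load the full
# sanctions and PEP data sources, refreshed regularly.
WATCHLIST = ["Ivan Petrov", "Acme Shell Holdings"]

def screen(name: str, threshold: float = 0.85) -> list:
    """Flag a customer name that closely matches a watchlist entry."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits
```

Fuzzy rather than exact matching matters here because launderers routinely vary spellings; the threshold is a tunable trade-off between false positives (manual review load) and missed hits.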


Digital transformation: 4 essential leadership skills

Decisiveness by itself is not enough. A strong technology leader needs to operate with flexibility. The pace of change is no longer linear, and leaders have less time to assess and understand every aspect of a decision. Consequently, decisions are made faster and are not always the best ones. Realizing which decisions are not spot-on and being able to adapt quickly is an example of the type of flexibility a leader needs. Another area leaders should understand is when, how, and from whom to take input when making adjustments. For example, leaders shouldn’t rely solely on customer input to make all product decisions. A flexible leader needs to understand the impact on the development teams and support teams as well. In our experience, teams with decisive and flexible leaders are more accepting of change. This is especially true during transformation. Leaders need to know when and how to be decisive to lead their team to success. In tandem, future-ready leaders can adapt to new information and inputs in today’s fast-paced technology environment.


Pathways to a More Sustainable Data Center

“When building a data center to suit today's needs and the needs 20 years in the future, the location of the facility is a key aspect,” he says. “Does it have space to expand with customer growth? Areas to remediate and replace systems and components? Is it in an area that has an extreme weather event seasonally? Are there ways to bring more power to the facility with this growth?” He says these are just a few of the questions that need to be thought of when deploying and maintaining a data center long term. "Technology may be able to stretch the limits of what’s possible, but sustainability starts with people,” Malloy adds. “Employees that implement and follow data center best practices keep a facility running in peak performance.” He says implementing simple things such as efficient lighting, following management-oriented processes and support-oriented processes for a proper maintenance and part replacement schedule increase the longevity of the facility equipment and increase customer satisfaction. 


Enterprise architecture modernizes for the digital era

Although leading enterprise architects see the need for a tool that better reflects the way they work, they also have concerns. “Provenance and credibility are key, so you risk making the wrong decisions as an enterprise architect if there’s no accuracy in the data,” Gregory says of how EAM tools are reliant on data quality. Winfield agrees, adding: “The difficult bit is getting accurate data into the EAM.” Gartner, in its Magic Quadrant for EA Tools, reports that the EAM sector could face some consolidation, too: “Due to the importance and growth in use of models in modern business, we expect to see some major vendors in adjacent market territories make strategic moves by either buying or launching their own EA tools.” Still, some CIOs question the value of adding EAM tools to their technology portfolio alongside IT service management (ITSM) tools, for example. The Very Group’s Subburaj foresees this being a challenge. “Some business leaders will struggle to see the direct business impact,” he says. 


Career path to CTO – we map out steps to take

Successful CTOs will need a range of skills, including technical but also business attributes. “The ability to advise and steer the technology strategy that is right for the business in the current and changing market conditions is crucial,” says Ryan Sheldrake, field CTO, EMEA, at cloud security firm Lacework. “Spending and investing wisely and in a timely manner is one of the more finessed parts of being a successful CTO.” ... “To achieve a promotion to this level, you need both,” she says. “For most of the CTO assignments we deliver, a solid knowledge base in software engineering, technical, product and enterprise architecture is required, as well as knowledge of cloud technologies and information security. From a leadership perspective, candidates need excellent influencing skills, strategic thinking, commercial management skills, and the gravitas to convey a vision and motivate a team.” There are ways in which individuals can help themselves stand out. “One of the critical things I did that really helped me develop into a CTO was to have an external mentor who was already a CTO,” says Mark Benson, CTO at Logicalis UKI. 


How Good Data Management Enables Effective Business Strategies

Data governance should also not be overlooked as an important component of data management and data quality. Though the two terms are sometimes used interchangeably, there are important differences. If data quality, as we’ve seen, is about making sure that all data owned by an organization is complete, accurate, and ready for business use, data governance, by contrast, is about creating the framework and rules by which an organization will use the data. The main purpose of data governance is to ensure the necessary data informs crucial business functions. It is a continuous process of assessing, often through a data steward, whether data that has been cleansed, matched, merged, and made ready for business use is truly fit for its intended purpose. Data governance rests on a steady supply of high-quality data, with frameworks for security, privacy, permissions, access, and other operational concerns. A data management strategy that encompasses the elements described above with respect to data quality will empower a business environment that can successfully achieve and even surpass business goals – from improving customer and employee experiences to increasing revenue and everything in between.


What Is Policy-as-Code? An Introduction to Open Policy Agent

As business, teams, and maturity progress, we'll want to shift from manual policy definition to something more manageable and repeatable at enterprise scale. How do we do that? First, we can learn from successful experiments in managing systems at scale: Infrastructure-as-Code (IaC), which treats the content that defines your environments and infrastructure as source code; and DevOps, the combination of people, process, and automation that achieves "continuous everything," continuously delivering value to end users. Policy as code uses code to define and manage policies, which are rules and conditions. Policies are defined, updated, shared, and enforced using code, leveraging Source Code Management (SCM) tools. By keeping policy definitions in source code control, whenever a change is made, it can be tested, validated, and then executed. The goal of PaC is not to detect policy violations but to prevent them. This leverages DevOps automation capabilities instead of relying on manual processes, allowing teams to move more quickly and reducing the potential for mistakes due to human error.
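The idea can be sketched in a few lines of Python (a hedged illustration: real OPA policies are written in Rego and evaluated by the OPA engine; the resource fields and policy names below are invented). Policies live in source control as ordinary code, so every change is reviewed and tested before it gates a deployment:

```python
# Minimal policy-as-code sketch. Each policy is a plain function over a
# resource description; CI runs them all and blocks the change on any violation.

def no_public_buckets(resource):
    """Deny storage buckets that allow public read access."""
    if resource.get("type") == "bucket" and resource.get("public_read"):
        return f"bucket {resource['name']} must not be publicly readable"
    return None

def required_owner_tag(resource):
    """Every resource must carry an 'owner' tag."""
    if "owner" not in resource.get("tags", {}):
        return f"resource {resource['name']} is missing an 'owner' tag"
    return None

POLICIES = [no_public_buckets, required_owner_tag]

def evaluate(resources):
    """Run every policy against every resource; return all violations."""
    violations = []
    for resource in resources:
        for policy in POLICIES:
            message = policy(resource)
            if message:
                violations.append(message)
    return violations

resources = [
    {"type": "bucket", "name": "logs", "public_read": True, "tags": {"owner": "sre"}},
    {"type": "vm", "name": "web-1", "tags": {}},
]

# Enforced in CI before deployment: a non-empty list blocks the change.
print(evaluate(resources))
```

Because the checks run automatically in the pipeline, violations are prevented before deployment rather than detected afterwards, which is the point the article makes.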



Quote for the day:

"Those who are not true leaders will just affirm people at their own immature level." -- Richard Rohr

Daily Tech Digest - January 31, 2023

Microsoft says cloud demand waning, plans to infuse AI into products

Microsoft Azure and other cloud services grew 38% in constant currency terms on a year-on-year basis, slowing down by 4% from the previous sequential quarter. “As I noted earlier, we exited Q2 with Azure growth in the mid-30s in constant currency. And from that, we expect Q3 growth to decelerate roughly four to five points in constant currency,” Amy Hood, chief financial officer at Microsoft, said during an earnings call. Cloud growth is expected to slow further through the year, warned Microsoft Chief Executive Satya Nadella. “As I meet with customers and partners, a few things are increasingly clear. Just as we saw customers accelerate their digital spend during the pandemic, we are now seeing them optimize that spend,” Nadella said during the earnings call, adding that enterprises were exercising caution in spending on cloud. Explaining further about enterprises optimizing their spend, Nadella said that enterprises wanted to get the maximum return on their investment and save expenses to put into new workloads.


Why Software Talent Is Still in Demand Despite Tech Layoffs, Downturn and a Potential Recession

We live in a world run by software programs. With increasing digitization, there will always be a demand for software solutions. In particular, software developers are in high demand within the tech industry. In the age of data, firms need software developers who will analyze the data to create software solutions. They will also use the data to understand user needs, monitor performance and modify the programs accordingly. Software developers have skills that make them valuable across many industries. As long as an industry needs software solutions, a developer can provide and customize them for the firms that need them. ... Many tech workers suffered a terrible blow in 2022. Their prestigious jobs at giant tech firms vanished, leaving many stranded and confused. However, there is still a significant demand for tech professionals in our technological world, particularly software developers. Software development is the bedrock of the tech industry. Software engineers with valuable skill sets, experience and drive will quickly find other positions and opportunities. 


Cybercrime Ecosystem Spawns Lucrative Underground Gig Economy

Improving defenses have forced attackers to improve their tools and techniques, driving the need for more technical specialists, explains Polina Bochkareva, a security services analyst at Kaspersky. "Business related to illegal activities is growing on underground markets, and technologies are developing along with it," she says. "All this leads to the fact that attacks are also developing, which requires more skilled workers." The underground jobs data highlights the surge in activity in cybercriminal services and the professionalization of the cybercrime ecosystem. Ransomware groups have become much more efficient as they have turned specific facets of operations into services, such as offering ransomware-as-a-service (RaaS), running bug bounties, and creating sales teams, according to a December report. In addition, initial access brokers have productized the opportunistic compromise of enterprise networks and systems, often selling that access to ransomware groups. Such division of labor requires technically skilled people to develop and support the complex features, the Kaspersky report stated.


3 ways to stop cybersecurity concerns from hindering utility infrastructure modernization efforts

Cybersecurity is a priority across industries and borders, but several factors add to the complexity of the unique environment in which utilities operate. Along with a constant barrage of attacks, as a regulated industry, utilities face several new compliance and reporting mandates, such as the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA). Other security considerations include aging OT, which can be challenging to update and to protect, the lack of control over third-party technologies and IoT devices such as smart home devices and solar panels, and finally, the biggest threat of all: human error. These risk factors put extra pressure on utilities, as one successful attack can have deadly consequences. The instance of a hacker attempting to poison (thankfully unsuccessfully) the water supply in Oldsmar, Florida is one example that comes to mind. Utilities have a lot to contend with even before adding data analytics into the mix. However, it is interesting to point out that consumers are significantly less worried about the privacy of data collected by utilities. 


Why cybersecurity teams are central to organizational trust

No business is an island; it depends on many partners (whether formal business partners or some other relationship) – a fact highlighted by the widespread supply chain challenges across many industries over the past couple of years. The security of software supply chains – which is to say, dependencies on upstream libraries and other code used by organizations in their software – is a topic of considerable focus today, up to and including the U.S. executive branch. It’s still arguably not getting the attention it deserves, though. The aforementioned 2023 Global Tech Outlook report found that, among the funding priorities within security, third-party or supply chain risk management came in at the very bottom, with just 12 percent of survey respondents saying it was a top priority. Deb Golden, who leads Deloitte’s U.S. Cyber and Strategic Risk practice, told the authors that there needs to be more scrutiny over supply chains. “Organizations are accountable for safeguarding information and share a responsibility to respond and manage broader network threats in near real-time,” she said. 


Global Microsoft cloud-service outage traced to rapid BGP router updates

The withdrawal of BGP routes prior to the outage appeared largely to impact direct peers, ThousandEyes said. With a direct path unavailable during the withdrawal periods, the next best available path would have been through a transit provider. Once direct paths were readvertised, the BGP best-path selection algorithm would have chosen the shortest path, resulting in a reversion to the original route. These re-advertisements repeated several times, causing significant route-table instability. “This was rapidly changing, causing a lot of churn in the global internet routing tables,” said Kemal Sanjta, principal internet analyst at ThousandEyes, in a webcast analysis of the Microsoft outage. “As a result, we can see that a lot of routers were executing best path selection algorithm, which is not really a cheap operation from a power-consumption perspective.” More importantly, the routing changes caused significant packet loss, leaving customers unable to reach Microsoft Teams, Outlook, SharePoint, and other applications. 
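The reversion behavior described above can be sketched with a toy model (a hedged illustration: AS-path length is only one of several tie-breakers in the real BGP decision process, and the AS numbers below are used purely for illustration):

```python
# Toy model of BGP best-path selection by AS-path length, shown in isolation.
# In the outage scenario, each withdrawal/re-advertisement cycle forces
# routers to rerun this selection, churning the global routing tables.

def best_path(routes):
    """Pick the advertised route with the shortest AS path."""
    return min(routes, key=lambda r: len(r["as_path"])) if routes else None

direct = {"via": "direct-peer", "as_path": [8075]}            # AS numbers illustrative
transit = {"via": "transit-provider", "as_path": [3356, 8075]}

# Normal state: the direct path wins because its AS path is shorter.
assert best_path([direct, transit])["via"] == "direct-peer"

# During the withdrawal, only the transit path remains available.
assert best_path([transit])["via"] == "transit-provider"

# Re-advertisement restores the direct path; every repetition of this
# flip triggers another round of best-path selection on affected routers.
assert best_path([direct, transit])["via"] == "direct-peer"
```

Each flip between these two states is cheap in the toy model but, as the analyst notes, repeated best-path recomputation at internet scale is an expensive operation for real routers.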


New analog quantum computers to solve previously unsolvable problems

The essential idea of these analog devices, Goldhaber-Gordon said, is to build a kind of hardware analogy to the problem you want to solve, rather than writing some computer code for a programmable digital computer. For example, say that you wanted to predict the motions of the planets in the night sky and the timing of eclipses. You could do that by constructing a mechanical model of the solar system, where someone turns a crank, and rotating interlocking gears represent the motion of the moon and planets. In fact, such a mechanism was discovered in an ancient shipwreck off the coast of a Greek island dating back more than 2000 years. This device can be seen as a very early analog computer. Not to be sniffed at, analog machines were used even into the late 20th century for mathematical calculations that were too hard for the most advanced digital computers at the time. But to solve quantum physics problems, the devices need to involve quantum components. 


Will Your Company Be Fined in the New Data Privacy Landscape?

“Some large US companies are continuing to be dealt pretty significant fines,” she says. “The regulation and fining of companies like Meta and others have raised consumer awareness of privacy rights. I think we’re approaching a perfect storm in the US where the rest of the world is moving toward a more consumer-protective landscape, so the US is following suit.” This includes activity by state policymakers as well as responses to cybersecurity breaches, Simberkoff says. She sees the conversation on data privacy being driven by increasingly complex regulatory requirements and consumer awareness of data privacy, which can include identity theft or stolen credit card information. “I think, frankly, companies like Apple help that dialogue forward because they’ve made privacy one of their key issues in advertising,” says Simberkoff. The elevation of data privacy policies and consumer awareness might, at first blush, seem detrimental to data-driven businesses, but it could just require new operational approaches. “I think what we’re going to end up seeing is a different way of thinking about these things,” she says. 


What is the role of a CTO in a start-up?

The role of the CTO in a start-up can vary greatly from an equivalent position in a more established scale-up business. While in both scenarios the position concerns leadership of all technological decisions within a business, there are considerable differences in the focus and nature of the role. “Start-ups tend to be disruptive and fast-paced, with the goal of quick growth over long-term strategy development. So, start-up CTOs are often responsible for building the technological infrastructure from the ground up,” said Ryan Jones, co-founder of OnlyDataJobs. “Whereas in an established company, a CTO might be responsible for reviewing and improving the current technology stack and data infrastructure, in a start-up, these structures might not exist. So, the onus is on the CTO to create and implement an entire technological infrastructure and strategy. This also means that a hands-on approach is required. “Because start-up CTOs may be the only technologically minded individual within the company, they’re often required to go back on the tools and do the actual work required themselves rather than delegating to a team.”


Your Tech Stack Doesn’t Do What Everyone Needs It To. What Next?

IT needs to collaborate with citizen developers throughout the process to ensure maximum safety and efficiency. From the beginning, it’s important to confirm the team’s overall approach, select the right tools, establish roles, set goals, and discuss when citizen developers should ask for support from IT. Appointing a leader for the citizen developer program is a great way to help enforce these policies and hold the team accountable for meeting agreed-upon milestones. To encourage collaboration and make citizen automation a daily practice, it’s important to work continuously to identify pain points and manual work within business processes that can be automated. IT should regularly communicate with teams across the business, finance and HR departments to find opportunities for automation, clearly mapping out what change would look like for those impacted. Gaining buy-in from other team leaders is critical, so citizen developers and IT need to become internal advocates for the benefits of automation. Another non-negotiable ground rule is that citizen developers should only use IT-sanctioned tools and platforms. 



Quote for the day:

"If a window of opportunity appears, don't pull down the shade." -- Tom Peters

Daily Tech Digest - January 30, 2023

How to survive below the cybersecurity poverty line

All types of businesses and sectors can fall below the cybersecurity poverty line for different reasons, but generally, healthcare, start-ups, small- and medium-size enterprises (SMEs), education, local governments, and industrial companies all tend to struggle the most with cybersecurity poverty, says Alex Applegate ... These include wide, cumbersome, and outdated networks in healthcare, small IT departments and immature IT processes in smaller companies/start-ups, vast network requirements in educational institutions, statutory obligations and limitations on budget use in local governments, and custom software built around specific functionality and configurations in industrial businesses, he adds. Critical National Infrastructure (CNI) firms and charities also commonly find themselves below the cybersecurity poverty line, for similar reasons. The University of Portsmouth Cybercrime Awareness Clinic’s work with SMEs for the UK National Cyber Security Centre (NCSC) revealed that cybersecurity was a secondary issue for most micro and small businesses it engaged with, evidence that it is often the smallest companies that find themselves below the poverty line, Karagiannopoulos says.


The Importance of Testing in Continuous Deployment

Test engineers are usually perfectionists (I speak from my own experience), which is why it’s difficult for them to accept the risk of issues reaching end users. This approach has a hefty price tag and impacts the speed of delivery, but it’s acceptable if you deliver only once or twice per month. The correct approach would be automating critical paths in the application, both from a business perspective and for application reliability. Everything else can go to production without thorough testing because with continuous deployment, you can fix issues within hours or minutes. For example, if item sorting and filtering stops working in production, users might complain, but the development team could fix this issue quickly. Would it impact business? Probably not. Would you lose a customer? Probably not. These are the risks that should be OK to take if you can quickly fix issues in production. Of course, it all depends on the context – if you’re providing document storing services for legal investigations, it would be a good idea to have an automated test for sorting and filtering.
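As a sketch of what automating a critical path might look like, here is a hypothetical product-listing check in Python (the service, catalog, and field names are all invented for illustration) of the kind that would run on every deployment:

```python
# Hypothetical critical-path check: filtering and sorting a product listing
# must keep working after every deployment, so it gets an automated test.

def list_products(products, max_price=None, sort_by="price"):
    """Filter by price ceiling, then sort by the requested field."""
    if max_price is not None:
        products = [p for p in products if p["price"] <= max_price]
    return sorted(products, key=lambda p: p[sort_by])

CATALOG = [
    {"name": "desk", "price": 120},
    {"name": "lamp", "price": 35},
    {"name": "chair", "price": 80},
]

def test_filter_and_sort():
    result = list_products(CATALOG, max_price=100)
    assert [p["name"] for p in result] == ["lamp", "chair"]

test_filter_and_sort()
print("critical-path test passed")
```

Everything outside checks like this can ship with lighter testing, because continuous deployment makes a production fix a matter of minutes rather than a release cycle.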


Why Trust and Autonomy Matter for Cloud Optimization

With organizations beginning to ask teams to do more with less, optimization — of all kinds — is going to become a vital part of what technology teams (development and operations alike) have to do. But for that to be really effective, team autonomy also needs to be founded on confidence — you need to know that what you’re investing time, energy and money on makes sense from the perspective of the organization’s wider goals. Fortunately, Spot can help here too. It gives teams the data they need to make decisions about automation, so they can prioritize according to what matters most from a strategic perspective. “People aren’t really sure what’s going to be happening six, nine, 10 months down the road,” Harris says. “Making it easier for people to get that actionable data no matter what part of the business you’re in, so that you can go in and you can say, ‘Here’s what we’re doing right, here’s where we can optimize’ — that’s a big focus for us.” One of the ways that Spot enables greater autonomy is with automation features. 


Keys to successful M&A technology integration

For large organisations merging together, unifying networks and technologies may take years. But for SMBs (small and medium-sized businesses) utilising more traditional technologies such as VPNs, integrations may be accomplished more quickly and with less friction. In scenarios where both the acquiring company and the company being acquired utilise more sophisticated SD-WAN networks, these technologies tend to be closed and proprietary in nature. Therefore, if both companies utilise the same vendor, integration can be managed more easily. On the other hand, if the vendors differ, it is not going to interlink with other networks as easily and needs a more careful step-by-step network transformation plan. ... Another key to a successful technology merger is to truly understand where your applications are going. For example, if two New York companies are joining forces, with most of the data and applications residing in the US East Coast, it wouldn’t make sense to interconnect networks in San Francisco. Along with this, it is important to make sure your regional networks are strong, even within your global network. In terms of where you are sending your traffic and data, it’s important to be as efficient as possible.


Understanding service mesh

Service meshes don’t give an application’s runtime environment any additional features. Service meshes are unique in that they abstract the logic governing service-to-service communication to an infrastructure layer. This is accomplished by integrating a service mesh as a collection of network proxies into an application. Proxies are frequently used to access websites. Typically, a company’s web proxy receives requests for a web page and evaluates them for security flaws before sending them on to the host server. Prior to returning to the user, responses from the page are also forwarded to the proxy for security checks. ... But a service mesh is an essential management system that helps all the different containers to work in harmony. Here are several reasons why you will want to implement service mesh in an orchestration framework environment. In a typical orchestration framework environment, user requests are fulfilled through a series of steps, where each of the steps is performed by a container. Each one runs a service that plays a different but vital role in fulfilling that request. Let us call this role played by each container a business logic.
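The sidecar-proxy idea can be sketched conceptually in Python (a hedged illustration, not how real meshes such as Istio or Linkerd are implemented; all names are invented): the proxy wraps every service-to-service call, adding retries and metrics without touching the service's own business logic.

```python
# Conceptual sidecar proxy: communication concerns (retries, metrics) live
# in the proxy, the infrastructure layer, not in the service's business logic.

class SidecarProxy:
    def __init__(self, call, retries=2):
        self.call = call          # the real downstream service endpoint
        self.retries = retries
        self.metrics = {"requests": 0, "failures": 0}

    def __call__(self, request):
        self.metrics["requests"] += 1
        for attempt in range(self.retries + 1):
            try:
                return self.call(request)
            except ConnectionError:
                self.metrics["failures"] += 1
        raise ConnectionError("upstream unavailable after retries")

# A flaky downstream service that fails once, then recovers.
attempts = []
def inventory_service(request):
    attempts.append(request)
    if len(attempts) == 1:
        raise ConnectionError("transient failure")
    return {"in_stock": True}

proxy = SidecarProxy(inventory_service)
assert proxy({"item": "sku-42"}) == {"in_stock": True}
assert proxy.metrics == {"requests": 1, "failures": 1}
```

In a real mesh the proxy is a separate process deployed alongside each container, so every service gets this behavior uniformly without code changes.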


Chaos Engineering: Benefits of Building a Test Strategy

Many organizations struggle to get visibility into where their most sensitive data is stored. Improper handling of that data can have disastrous consequences, such as compliance violations or trade secrets falling into the wrong hands. “Using chaos engineering could help identify vulnerabilities that, unless remediated, could be exploited by bad actors within minutes,” Benjamin says. Kelly Shortridge, senior principal of product technology at Fastly, says organizations can use chaos engineering to generate evidence of their systems’ resilience against adverse scenarios, like attacks. “By conducting experiments, you can proactively understand how failure unfolds, rather than waiting for a real incident to occur,” she says. The very nature of experiments requires curiosity -- the willingness to learn from evidence -- and flexibility so changes can be implemented based on that evidence. “Adopting security chaos engineering helps us move from a reactive posture, where security tries to prevent all attacks from ever happening, to a proactive one in which we try to minimize incident impact and continuously adapt to attacks,” she notes.
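A minimal chaos-style experiment might look like the following Python sketch (hypothetical services and names): deliberately inject a failure into a dependency and assert that the system degrades gracefully rather than crashing.

```python
# Minimal chaos experiment: the "hypothesis" (the homepage survives a
# failed recommendations call) is checked by assertions, proactively,
# rather than waiting for a real incident.
import random

def fetch_recommendations(user_id, fail_rate=0.0, rng=random):
    """Downstream call that the experiment can force to fail."""
    if rng.random() < fail_rate:
        raise TimeoutError("recommendation service unavailable")
    return ["item-1", "item-2"]

def render_homepage(user_id, fail_rate=0.0, rng=random):
    """The page must still render with a fallback when the call fails."""
    try:
        recs = fetch_recommendations(user_id, fail_rate, rng)
    except TimeoutError:
        recs = []                      # graceful fallback: empty shelf
    return {"user": user_id, "recommendations": recs}

# Steady state: no injected failures.
assert render_homepage("u1")["recommendations"] == ["item-1", "item-2"]

# Experiment: 100% failure injection; the page still renders.
page = render_homepage("u1", fail_rate=1.0)
assert page["recommendations"] == []
print("experiment passed: homepage survives dependency failure")
```

Real chaos tooling injects faults at the infrastructure level (network, process, node) instead of a function parameter, but the experimental loop of hypothesis, injection, and evidence is the same.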


How to get buy-in on new technology: 3 tips

When making a case for new technology, keep your audience in mind. Tailoring your arguments to their role and goals will put you in a much better position to capture their attention and generate enthusiasm. Sometimes this will require you to shift away from strict business goals. If you need to speak with the chief revenue officer and are trying to justify an additional $100,000 for your tech stack, for example, you will need to focus on the bottom line and the financial benefit your proposal could provide. On the other hand, the head of engineering might not be interested in the finances and would rather discuss how engineers can better avoid burnout or otherwise become easier to manage. When advocating for stack improvements, working with a partner helps substantially. It’s good to have a boss or teammate help, but even better to find a leader on a different team or even in another department. If multiple departments have team members who champion a specific improvement, it makes a strong case that there’s a pervasive need for stack enhancements across the entire company.


How organizations can keep themselves secure whilst cutting IT spending

The zero trust network access model has been a major talking point for CIOs, CISOs and IT professionals for some time. While most organizations do not fully understand what zero trust is, they recognize the importance of the initiative. Enforcing principles of least privilege minimizes the impact of an attack. In a zero trust model, an organization can authorize access in real-time based on information about the account they have collected over time. To make such informed decisions, security teams need accurate and up-to-date user profiles. Without them, security teams can’t be 100% confident that the user gaining access to a critical resource isn’t a threat. However, with the sprawl of identity data, stored across cloud and legacy systems that are unable to communicate with each other, such decisions cannot be made accurately. Ultimately, the issue of identity management isn’t only getting more challenging with the digitalization of IT and migration to the cloud – it’s now also halting essential security projects such as zero trust implementation.
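The per-request, least-privilege decision described above can be sketched as an attribute check in Python (the profile fields are invented for illustration; the article's point is that stale or fragmented identity data makes exactly this decision unreliable):

```python
# Sketch of an attribute-based, per-request access decision in a
# zero-trust model: access is granted only when every condition holds
# against current identity data, never assumed from network location.

def authorize(profile, resource):
    """Grant access only when every condition holds right now."""
    checks = [
        profile.get("active", False),                    # account not deprovisioned
        resource in profile.get("entitlements", ()),     # least privilege
        profile.get("mfa_verified", False),              # strong authentication
        profile.get("risk_score", 100) < 50,             # behavioral risk signal
    ]
    return all(checks)

profile = {
    "user": "jdoe",
    "active": True,
    "entitlements": {"billing-db"},
    "mfa_verified": True,
    "risk_score": 20,
}

assert authorize(profile, "billing-db") is True
# A stale profile, e.g. MFA state never synced from a legacy system,
# produces the wrong decision: here, a false denial.
assert authorize({**profile, "mfa_verified": False}, "billing-db") is False
```

If the attributes feeding these checks live in disconnected cloud and legacy stores, the decision is only as trustworthy as the least up-to-date of them.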


Economic headwinds could deepen the cybersecurity skills shortage

Look at anyone’s research and you’ll see that more organizations are turning to managed services to augment overburdened and under-skilled internal security staff. For example, recent ESG research on security operations indicates that 85% of organizations use some type of managed detection and response (MDR) service, and 88% plan to increase their use of managed services in the future. As this pattern continues, managed security service providers (MSSPs) will need to add headcount to handle increasing demand. Since service provider business models are based on scaling operations through automation, they will calculate a higher return on employee productivity and be willing to offer more generous compensation than typical organizations. One aggressive security services firm in a small city could easily gain a near monopoly on local talent. At the executive level, we will also see increasing demand for the services of virtual CISOs (vCISOs) to create and manage security programs in the near term.


2023 Will Be the Year FinOps Shifts Left Toward Engineering

By enabling developers to adopt dynamic logs for troubleshooting issues in production without the need to redeploy and add more costly logs and telemetry, developers can own the FinOps cost optimization responsibility earlier in the development cycle and shorten the cost feedback loop. Dynamic logs and developer-native observability that are triggered from the developer development environment (IDE) can be an actionable method to cut overall costs and better facilitate cross-team collaboration, which is one of the core principles of FinOps. “FinOps will become more of an engineering problem than it was in the past, where engineering teams had fairly free reign on cloud consumption. You will see FinOps information shift closer to the developer and end up part of pull-request infrastructure down the line,” says Chris Aniszczyk, CTO at the Cloud Native Computing Foundation. Keep in mind that it’s not always easy to prioritize and decide when to pull the cost optimization trigger. 



Quote for the day:

"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik

Daily Tech Digest - January 29, 2023

Data Mesh Architecture Benefits and Challenges

Data mesh architectures can help businesses find quick solutions to day-to-day problems, discover better ways to manage their resources, and develop more agile business models. Here is a quick review of data mesh architecture benefits: The data mesh architecture is adaptable, able to evolve as the company scales, changes, and grows; the data mesh enables data from disparate systems to be collected, integrated, and analyzed in place, eliminating the need to extract data from disparate systems into one central location for further processing; within a data mesh, the individual domain becomes a mini-enterprise and gains the power to self-manage and serve on all aspects of its Data Science and data processing projects; a data mesh architecture allows companies to increase efficiency by eliminating the single-pipeline data flow, while protecting the system through centralized monitoring infrastructure; and the domain teams can design and develop their need-specific analytics and operational use cases while maintaining full control of all their data products and services.


Uncovering the Value of Data & Analytics: Transformation With Targeted Reporting

Most of the time, (Cloud) Data & Analytics transformations are initially approved for implementation based on a solid business case with clear return expectations. However, programs often don’t have a functioning value framework to report on the business value generated from change and the progress toward the initial expectations. In such cases, the transformation impact for executives and business leaders is a “black box” with no clear indication of direction. As time passes and the costs associated with transformation programs increase due to scaling, an insufficient Value Reporting Framework can lead to loss in executive buy-in and reduction of investment budgets. Furthermore, with high market volatility, initiatives without a tangible influence on the company’s bottom line tend to be deprioritized quickly. On the more positive side, a high number of companies have robust value scorecards to track their transformation performance. However, metrics in these scorecards tend to be either too operational for executives to easily digest or focus exclusively on cost aspects. 


Elevating Security Alert Management Using Automation

Context — every security analyst says they need it, but everyone seems to have a different definition for it. If you’ve ever worked an alert queue and thought to yourself, “I wish I could stop these alerts from appearing right now” or “Why am I looking at activity that someone else is already triaging,” then this section is for you — within the first two weeks of deployment, this feature of the system reduced our alert volume by 25%, saving 3 to 4.5 hours of manual effort. In our alert management system, “context” is information derived from the alert payload that is used as metadata for suppression, deduplication, and metrics. Reduction of toil in the system is primarily attributed to its ability to use context to stop wasteful alerts from getting to the team. This creates the opportunity for the team to, for example, suppress alerts that we know require tuning by a detection engineer or ignore duplicate alerts for activity that is being investigated but may be on hold while we wait for additional information. These alerts are never dropped — they still flow through the rest of the system and generate a ticket — but they are not assigned to a person for triage.
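One way such context-driven suppression and deduplication could work is sketched below in Python (the field names, rules, and hosts are invented, not the team's actual system): a context key is derived from the alert payload, known-noisy detections are suppressed, and alerts already under triage are deduplicated.

```python
# Sketch of context-driven alert routing: suppressed and deduplicated
# alerts still flow through the system (a ticket is created) but are not
# assigned to a person for triage.

SUPPRESSED = {("noisy-detection", "build-server")}   # awaiting detection tuning
in_triage = set()                                     # context keys being worked

def context_key(alert):
    """Context: metadata derived from the alert payload."""
    return (alert["rule"], alert["host"])

def route(alert):
    key = context_key(alert)
    if key in SUPPRESSED:
        return "suppressed"        # ticket created, nobody assigned
    if key in in_triage:
        return "deduplicated"      # same activity already under investigation
    in_triage.add(key)
    return "assigned"

alerts = [
    {"rule": "noisy-detection", "host": "build-server"},
    {"rule": "lateral-movement", "host": "db-01"},
    {"rule": "lateral-movement", "host": "db-01"},   # duplicate activity
]

print([route(a) for a in alerts])
# → ['suppressed', 'assigned', 'deduplicated']
```

Only the third path consumes an analyst's time, which is where the reported 25% reduction in alert volume would come from in a scheme like this.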


Could A Data Breach Land Your CISO In Prison?

Why would a CISO worry about personally facing legal consequences for company cybersecurity decisions? I don’t have direct knowledge of Kissner’s motives. However, I do know that for the last several months CISOs have been talking to each other about how last October, a federal jury convicted the CISO of a major U.S. company for covering up a data breach. The jury found Joe Sullivan, a former Chief Security Officer, guilty of obstructing justice and actively failing to report a felony—charges stemming from “bug bounty” payments he authorized to hackers who breached the company in 2016. The company was already responding to an investigation into a 2014 breach but did not inform the FTC about the new breach in 2016. Sullivan didn’t make that decision alone: others in the company were looped in, including then-CEO Travis Kalanick, the Chief Privacy Officer, and the company’s in-house privacy/security lawyer. Nevertheless, Sullivan was the only employee to face charges. How might CISOs handle their roles differently in a world where a poorly-handled breach won’t just get you fired—it might land you in prison?


The new age of exploration: Staking a claim in the metaverse

Spatial ownership is the essential concept that makes possible an open metaverse and 3D digital twin of the earth that is not built or controlled by a monopolistic entity. Spatial ownership enables users to own virtual land in the metaverse. It uses non-fungible tokens (NFTs), which represent a unique digital asset that can only have one official owner at a time and can’t be forged or modified. In the metaverse, users can buy NFTs linked to particular parcels of land that represent their ownership of these “properties.” Spatial ownership in the metaverse can be compared to purchasing web domains on today’s internet. As with physical real estate, some speculatively buy web domains hoping to sell the rights to a potentially popular or unique URL at a future date. In contrast, others purchase to lock down control and ownership over their own little portion of the web. Domains are similar to prime real estate in that almost every business needs one, and many brands will look for the same or similar names. The perfect domain name can help a business monopolize its market and get the lion’s share of web visibility in its niche.


Empowering Leadership in a VUCA World

The term VUCA (volatility, uncertainty, complexity, and ambiguity) aptly applies to the world we live in. Making business decisions has become incredibly complex, and we’re not just making traditional budget and managerial decisions. More than ever, leaders have to consider community impact, employee wellbeing, and business continuity under extraordinary uncertainty. There are so many considerations for even the smallest decisions we make. The highly distributed nature of how people work today means we have to consider a broader potential impact of every statement and every choice. Leaders have the responsibility to think about equity when some employees are sitting in the room with you and others are remote. How much face time are you giving each? Are you treating instant messages with the same level of attention as someone dropping into your office? This situation is not likely to be any less of a challenge for future leaders. It’s our responsibility as leaders, as people who impact the future of our businesses, to give all the people in our organizations an equal opportunity to contribute and grow.


Using Artificial Intelligence To Tame Quantum Systems

Quantum computing has the potential to revolutionize the world by enabling high computing speeds and reshaping cryptographic techniques. That is why many research institutes and big-tech companies such as Google and IBM are investing significant resources in developing such technologies. But to enable this, researchers must achieve complete control over the operation of such quantum systems at very high speed, so that the effects of noise and damping can be eliminated. “In order to stabilize a quantum system, control pulses must be fast – and our artificial intelligence controllers have shown the promise to achieve such a feat,” Dr. Sarma said. “Thus, our proposed method of quantum control using an AI controller could provide a breakthrough in the field of high-speed quantum computing, and it might be a first step to achieving quantum machines that are self-driving, similar to self-driving cars. We are hopeful that such methods will attract many quantum researchers for future technological developments.”
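The control task can be made concrete with a toy example. The sketch below is a deliberate simplification, not the researchers’ method: it brute-force searches for the pulse duration that flips a qubit under a resonant Rabi drive, the simplest instance of the control problem an AI controller would tackle. The Rabi frequency and search grid are invented for illustration.

```python
import numpy as np

# Toy control problem: choose a pulse duration t so that a resonant Rabi
# drive (Hamiltonian H = (omega/2) * sigma_x) flips a qubit |0> -> |1>.
# For this Hamiltonian the excited-state population is sin^2(omega*t/2),
# so the optimum is a "pi pulse": t = pi / omega.

def excited_population(omega: float, t: float) -> float:
    return np.sin(omega * t / 2.0) ** 2

def best_pulse(omega: float, times: np.ndarray) -> float:
    # Brute-force search stands in for the learned controller.
    return times[np.argmax([excited_population(omega, t) for t in times])]

omega = 2 * np.pi * 5e6             # 5 MHz Rabi frequency (illustrative)
times = np.linspace(0, 2e-7, 2001)  # candidate pulse durations
t_opt = best_pulse(omega, times)    # expected near pi/omega = 100 ns
```

A real controller must also contend with noise and damping, which is precisely where learned, feedback-driven pulse shaping earns its keep over this closed-form toy.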


Avoid a Wipeout: How To Protect Organisations From Wiper Malware

A 3-2-1-1 data-protection strategy is a best practice for defending against malware, including wiper attacks. This strategy entails maintaining three copies of your data, on two different media types, with one copy stored offsite. The final 1 in the equation is immutable object storage. By maintaining multiple copies of data, organisations will have a backup available in case one copy is lost or corrupted. This is imperative in the event of a wiper attack, which destroys or erases data. Storing data on different media types also helps protect against wiper attacks: if one type of media is compromised, you still have access to your data through the other copies. Keeping at least one copy of your data offsite, either in a physical location or in the cloud, provides an additional layer of protection. If a wiper attack destroys on-site copies of your data, you’ll still have access to your offsite backup. The final safeguard is immutable object storage, which involves continuously taking snapshots of your data every 90 seconds, ensuring that you can quickly recover it even during a wiper attack.
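The 3-2-1-1 rule is simple enough to encode as a checklist. The sketch below is a hypothetical compliance check, not any vendor’s API; the `BackupCopy` fields and media names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str        # e.g. "disk", "tape", "cloud-object"
    offsite: bool
    immutable: bool

def meets_3_2_1_1(copies: list[BackupCopy]) -> bool:
    """3-2-1-1: >=3 copies, >=2 media types, >=1 offsite, >=1 immutable."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
        and any(c.immutable for c in copies)
    )

copies = [
    BackupCopy("disk", offsite=False, immutable=False),
    BackupCopy("tape", offsite=False, immutable=False),
    BackupCopy("cloud-object", offsite=True, immutable=True),
]
```

Here the offsite cloud copy doubles as the immutable one, which satisfies the rule; a stricter policy might demand they be separate copies.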


How to use Microsoft KQL for SIEM insight

While KQL is easy to work with, you won’t get good results if you don’t understand the structure of your data. First, you need to know the names of all of the tables used in Sentinel’s workspace. These are needed to specify where you’re getting data from, with modifiers to take only a set number of rows and to limit how much data is returned. This data then needs to be sorted, with the option of taking only the latest results. Next, the data can be filtered so that, for example, you’re only getting data from a specific IP range or for a set time period. Once data has been selected and filtered, it’s summarized. This creates a new table with only the data you’ve filtered and only in the columns you’ve chosen. Columns can be renamed as needed and can even be the product of KQL functions, for example summing data or using the maximum and minimum values for the data. The available functions include basic statistical operations, so you can use your queries to look for significant data, a useful tool when hunting suspected intrusions through gigabytes of logs.
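The pipeline just described (pick a table, filter, summarize, sort, take) can be sketched in a short query. The example below assumes the standard Sentinel `SigninLogs` schema and an illustrative IP range; adjust the table and column names for your own workspace.

```kql
// Failed sign-ins from a given IP range over the last day,
// summarized per account.
SigninLogs
| where TimeGenerated > ago(1d)
| where ipv4_is_in_range(IPAddress, "203.0.113.0/24")
| where ResultType != "0"        // in SigninLogs, "0" means success
| summarize Attempts = count(), LastSeen = max(TimeGenerated)
    by UserPrincipalName
| sort by Attempts desc
| take 20
```

The `summarize ... by` clause is what builds the new, narrower table the passage mentions, and `take` caps how much data comes back.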


Leaders anticipate cyber-catastrophe in 2023, report World Economic Forum, Accenture

“I think we may see a significant event in the next year, and it will be one in the ICS/OT technologies space. Due to long life, lack of security by design (due in many cases to age) and difficulty to patch, in mission critical areas — an attack in this space would have immense effects that will be felt,” France said. “So I somewhat agree with the hypothesis of the report and the contributors to the survey. You could already argue that we have seen a moderate attack with UK Royal Mail, where ransomware stopped the sending of international parcels for a week or more,” France said. France argues that organizations can insulate themselves from these threats by putting more resources into defensive measures and by treating cybersecurity as a board issue. Key steps include implementing responsive measures, providing employees with exercises on how to react, implementing recovery plans, planning for supply chain instability, and looking for alternative vendors who can provide critical services in the event of a disruption.



Quote for the day:

“If we wait until we’re ready, we’ll be waiting for the rest of our lives.” -- Lemony Snicket