Daily Tech Digest - August 17, 2023

What would an OT cyberattack really cost your organization?

Attacks on industrial control systems (ICS) may not be just about ransomware or accessing information but about deliberately making machines misbehave. Attackers can exploit vulnerabilities to make machines overheat or robotic arms swing unpredictably. An unsuccessful attack on a water utility in Florida attempted to raise the amount of lye in the drinking water; had it succeeded, it might have killed thousands. ... When operations in your factory, plant, or substation shut down, revenue will cease. So an important question, not just for the CISO but also for Operations, Finance, and other chiefs, is how long the organization can go without expected revenue that it may never recover. ... There will be significant damage to an organization's public reputation as news of an attack gets out. The customer trust that took years to build may be gone in an instant, and customers forced to find another supplier while you're shut down may not come back. After all, your shutdown not only inflicted damage on companies further down the chain, it may also have created an impression that you were careless in letting it happen.


The Risk of Quantifying Cyberrisk

Legal concerns could stem from the nature of risk quantification. This process is designed to uncover problems with an actionable amount of detail. Anything that is discoverable in a legal proceeding can find its way into a court case, and embarrassing fallout may ensue. The fear is that the very detailed CRQ risk assessment results will be made public. For many organizations that have not adopted CRQ, such results may include lists of broken or missing controls and audit results, all with corresponding verbal risk labels (e.g., high, medium, low). They could (and really should) also include a list of scenarios with the same risk labels attached to them. These results alone could be damning to some organizations. Specific CRQ concerns stem from having all of these elements tied to a potential amount of loss and frequency. However, it is difficult to imagine a court proceeding where strictly qualitative results would allow an organization to walk free.
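
To make the quantification concrete, here is a minimal sketch (not from the article) of the kind of arithmetic CRQ tools perform: combining an estimated event frequency with a loss-magnitude range to produce an expected annual loss and a tail percentile. Every parameter below is hypothetical.

```python
import random

def simulate_annual_loss(freq_per_year, loss_low, loss_high, trials=100_000):
    """Crude Monte Carlo: approximate a Poisson event count, then draw a
    uniform loss magnitude per event. All inputs are illustrative."""
    losses = []
    for _ in range(trials):
        # Binomial(n, p) with n*p = freq_per_year approximates Poisson(freq).
        events = sum(random.random() < 0.1 for _ in range(round(freq_per_year * 10)))
        losses.append(sum(random.uniform(loss_low, loss_high) for _ in range(events)))
    losses.sort()
    return {"expected_annual_loss": sum(losses) / trials,
            "95th_percentile_loss": losses[int(0.95 * trials)]}

# Hypothetical scenario: ~0.3 ransomware events/year, $200k-$2M per event.
print(simulate_annual_loss(0.3, 200_000, 2_000_000))
```

The point is not the numbers but that each scenario and broken control now carries a dollar figure, which is exactly what makes CRQ output both useful and, in legal discovery, sensitive.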


The CISO Report – The Culture Club

The report highlighted a number of key challenges facing organizations in EMEA, which are clearly now being discussed in the C-Suite. These challenges include the level of regulatory compliance that organisations now face, especially those operating in these regions. In my opinion, the General Data Protection Regulation (GDPR) is still a massively misunderstood piece of legislation that organisations need help with, yet the C-Suite recognises its importance. Added to this is the ongoing threat of cybercrime, as organisations large and small face an increasing number of cyberattacks, including ransomware attacks, data breaches, and Distributed Denial of Service (DDoS) attacks. ... To embed cybersecurity and data protection within an organisation, you do not look to build a security culture, but rather, you look to build a culture that respects the importance of Security. This is a simple, yet profound distinction. Every organization possesses a culture, which might either emerge naturally or be intentionally and meticulously developed. Regardless of its origins, the influence of this culture on an organization remains undeniable.


AI for Data Management: An Old Idea with New Potential

No matter how you choose to leverage AI in the data management space — whether you're using AI for more basic needs or you're taking advantage of next-generation AI technologies — your goal should be to identify ways that AI can accelerate workflows and reduce toil for data engineers. Much of the work that data engineers perform on a daily basis can be tedious and time-consuming. Converting data from one format to another by hand could take enormous amounts of time and is a boring task, to put it mildly. So is sifting through vast volumes of information to find data quality issues like redundant or empty cells. Even if you leverage tools to help search and sort data automatically, you're still likely to find yourself investing an inordinate amount of time on data quality if you have to write complex queries by hand to detect quality problems. But if you can substitute AI-based workflows for these tasks, you save yourself a lot of time and labor. 
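
As a rough illustration of the toil being described, here is a short pandas sketch (my example, not the author's) that surfaces empty cells, duplicate rows, and constant columns without hand-written queries:

```python
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    """Flag common data quality issues; checks and thresholds are illustrative."""
    return {
        "empty_cells_per_column": df.isna().sum().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        "constant_columns": [c for c in df.columns
                             if df[c].nunique(dropna=False) <= 1],
    }

df = pd.DataFrame({"id": [1, 2, 2], "name": ["a", "b", "b"],
                   "score": [None, 5, 5], "flag": [0, 0, 0]})
print(quality_report(df))
# -> one duplicate row, one empty cell in 'score', 'flag' is constant
```

An AI-assisted workflow would go further, suggesting or generating checks like these from the data itself, but even this much replaces a pile of hand-written queries.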


Low-code and no-code: Meant for citizen developers, but embraced by IT

Low-code and no-code continue to gain popularity because organizations "are realizing that these tools are not just for early-stage or beginner citizen developers but also for sophisticated, senior developers to save them valuable time and effort," says Pulijala. "Low-code/no-code helps, whether it's addressing talent shortages or freeing up other developers' time. With low-code/no-code solutions, a junior product manager can build a basic prototype, freeing up more senior engineers to focus on customized, higher code solutions. In addition to mitigating talent shortages, low-code/no-code tools improve business agility and contribute to cost savings since they significantly reduce hiring costs and application maintenance costs." ... "While no-code solutions are built from the point-of-view of a non-developer user, they will at times still require professional IT intervention. Enterprise applications can be complex and outages can happen, requiring IT to step in to triage and get things running again."


Multiple Flaws Uncovered in Data Center Systems

Data center equipment and infrastructure solutions provider CyberPower's PowerPanel Enterprise DCIM platform allows information technology teams to manage, configure and monitor the infrastructure within a data center through the cloud, serving as a single source of information and control for all devices. "These platforms are commonly used by companies managing anything from on-premises server deployments to larger, co-located data centers - like those from major cloud providers AWS, Google Cloud and Microsoft Azure," the researchers said. Dataprobe manufactures power management products that assist businesses in monitoring and controlling their equipment. The iBoot-PDU allows administrators to remotely manage the power supply to their devices and equipment via a "simple and easy-to-use" web application, according to the researchers, who added that the devices are "typically found in small to midsized data centers and used by SMBs managing on-premises server deployments."


Hybrid mesh firewall platforms gain interest as management challenges intensify

"A hybrid mesh firewall makes you highly dependent on one single vendor," says John Carey, managing director of the technology solutions group at global consulting firm AArete. "Some organizations prefer to have best-of-breed and select the right tool for the right job. You'll see CrowdStrike running alongside CyberArk running alongside Juniper running alongside Cisco. You don't see many organizations doing a blanket removal, taking out all those tools and putting in one. It's costly, and they don't want to be totally dependent on that one vendor." With a hybrid mesh firewall only able to manage firewalls from that one vendor, that could be a problem for those companies. Alternatively, an enterprise can use an NSPM product from a vendor such as Tufin or Firemon, says Scott Wheeler, cloud practice leader at Asperitas Consulting, an IT and cloud services firm. "They are not firewall products, but they do enable the concept of hybrid mesh firewall. So, depending on how you look at the semantics, they are more of a hybrid mesh firewall solution because you can manage across different firewall providers."


Why the cyber skills crisis is an opportunity to transform your cybersecurity

A strategic approach is needed for security leaders and their teams to address the resource crisis. A key response emerging in the market is security vendor consolidation. According to Gartner, 75% of organizations were pursuing consolidation in 2022, almost tripling since 2020. Considering that an alarming 35% of cyber budgets are being spent on tools that don’t give a measurable improvement in cybersecurity posture, it’s evident why businesses are seeking to consolidate and do more with less. However, there is a degree of caution around consolidating vendors and tools. Nearly four in five security leaders and decision-makers admitted to being concerned that consolidation will reduce their ability to mitigate cyber risk. But we found this skepticism to be unfounded. In reality, half of those who have begun consolidating have seen an improvement in security posture as a result. This is because, when approached strategically, consolidation streamlines security operations. 


Industrial modernization: Becoming future-ready in uncertain times

Future-ready companies have already embraced agile practices and distributed computing technologies like edge computing, containers, and microservices to optimize existing systems and drive innovation. IT modernization is the practice of updating older software and infrastructure to newer computing approaches, including languages, frameworks, architectures, and infrastructure platforms. It does not require wholesale replacement; if done well, modernization can extend the lifespan of an organization’s software and infrastructure while taking advantage of recent innovation. While the term legacy may have a negative connotation in technology, these systems are often the bedrock of a company’s business operations. Modern, cloud-native computing paradigms are distributed by nature. Modernization shifts the technology stack from a tightly coupled, hierarchical, siloed, and point-to-point structure to one that is application-driven, loosely coupled, software-defined, and integrated across all layers of the architecture.


Interrogate Your Software with AI — The Future for SREs

With AI-driven incident analysis, we gain the capability to process data rapidly and recognize correlations that otherwise might have been overlooked. This empowers us to take proactive measures and predict potential incidents using historical data, breaking free from the limitations of reactive maintenance. Moreover, AI-powered analysis can play a vital role in assisting SREs in determining the severity of incidents. By defining criteria for incident severity classification and relying on AI insights, we can make more informed decisions and prioritize response efforts efficiently. Resource allocation, a crucial aspect of SRE, can be guided by AI-generated statistics that paint a clear picture of an incident’s impact and resource requirements, enabling us to scale responses based on severity and complexity. Finally, we can’t forget about incident reports, documentation and runbooks. We all know how bad those can be. Depending on who triaged the incident, what’s reported and documented can range from a simple paragraph to pages of in-depth research and analysis. 
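
For instance, a severity rubric like the toy one below is the kind of explicit criteria an AI analysis could feed with estimated blast radius and error rates. The thresholds and the function are my illustration, not from the article.

```python
def classify_severity(error_rate: float, users_affected: int,
                      revenue_impacting: bool) -> str:
    """Toy incident-severity rubric. Real criteria would be tuned per
    service; AI-derived estimates could supply the inputs."""
    if revenue_impacting or users_affected > 10_000:
        return "SEV1"
    if error_rate > 0.05 or users_affected > 500:
        return "SEV2"
    return "SEV3"

# An AI pipeline estimates impact from telemetry; the rubric then prioritizes.
print(classify_severity(error_rate=0.08, users_affected=1_200,
                        revenue_impacting=False))  # -> SEV2
```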



Quote for the day:

"The problem with being a leader is that you're never sure if you're being followed or chased." -- Claire A. Murray

Daily Tech Digest - August 16, 2023

The looming battle over where generative AI systems will run

What is becoming more apparent is that the location where most generative AI systems will reside (public cloud platforms versus on-premises and edge-based platforms) is still being determined. Vellante’s article points out that AI systems are running neck-and-neck between on-premises and public cloud platforms. Driving this is the assumption that the public cloud comes with risks, including IP leakage or insights derived from your data surfacing at a competitor. Also, enterprises still have a lot of data in traditional data centers or on edge computing rather than in the cloud. This can cause problems when the data is not easily moved to the cloud, with data silos being common within most enterprises today. AI systems need data to be of value, and thus it may make sense to host the AI systems closest to the data. I would argue that data should not exist in silos and that hosting AI next to siloed data merely enables an existing problem. However, many enterprises may not have other, more pragmatic choices, given the cost of fixing such issues.


Quantum Computing: Australia’s Next Great Tech Challenge & Opportunity

One of the big opportunities for Australia in this space will be its close relationship with the United States. Because of the sheer value of quantum computing research and technology across both military and civilian IP, nations tend to be more circumspect about sharing information in comparison to conventional technology. The downside to this is that it means the U.S. isn’t able to draw on the same global pool of talent that it’s used to. A shortage of talent isn’t such a major issue in regular computing fields because global talent tends to pool and openly share information. ... “As other nations push forward, Australia risks missing out on the potential economic benefits,” a report by the University of Sydney notes. “We could also lose talented workers to countries that are investing more in quantum research. Projects like the ambitious attempt to build the world’s first complete quantum computer aim to provide local opportunities and funding alongside their top-line goals. Moreover, Australia has a responsibility to ensure quantum technologies are developed and used ethically, and their risks managed.”


Q&A: An Introduction to Streaming AI

Streaming AI is about continuously training ML models using real-time data, sometimes with human involvement. The incoming data streams from many sources are analyzed, combined with contextual information, and matched against features that carry condensed information and intelligence specific to the given problem. ML algorithms continually generate these features using the most current data available. On the other hand, as noted earlier, generative AI focuses on generating responses based on a “seed” and then a pattern for finding the next thing to tack on. This works to generate content that conforms to certain parameters the model has “learned.” It is bounded, but not in a way that the boundaries can be easily understood. Until the recent rise of LLMs, considerable effort was invested in making ML models explainable to humans. The question was: how does the model arrive at its result? The “I have no idea” response is hard for humans to accept. In the made-up legal case citations example, the LLM program generated a motion that argued a point, but when asked to explain or validate its path, it just made some stuff up.
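
A minimal sketch of what "continually generating features from the most current data" can look like in code (my illustration; the alpha value and transaction framing are assumptions):

```python
class StreamingFeature:
    """Exponentially weighted running mean, updated per event, so the
    feature always reflects the most recent data."""
    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha
        self.value = None

    def update(self, x: float) -> float:
        self.value = x if self.value is None else (
            self.alpha * x + (1 - self.alpha) * self.value)
        return self.value

feat = StreamingFeature()
for amount in [12.0, 15.0, 9.5, 140.0]:   # e.g., transaction amounts
    print(round(feat.update(amount), 2))   # the spike stands out vs. the mean
```

A fraud model would consume features like this one, recomputed on every arriving event rather than in nightly batches.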


CISO’s role in cyber insurance

Enter cyber insurance, a safety net that offers organisations a way to mitigate the financial impact of these cyber incidents. However, navigating the complex landscape of cyber insurance is no small feat. This is where the Chief Information Security Officer (CISO) comes into play. As the vanguard of an organisation’s cybersecurity efforts, the CISO not only ensures that digital fortresses are robust but also plays a pivotal role in the realm of cyber insurance. Their expertise and insights are instrumental in assessing risks, selecting the right coverage, and ensuring that the organisation gets the most out of its policy. In essence, the CISO bridges the gap between the technical world of cybersecurity and the financial realm of insurance, ensuring that businesses are both well protected and well insured. ... As the primary custodian of an organisation’s cybersecurity posture, the CISO is responsible for conducting a thorough risk assessment. This involves identifying potential vulnerabilities, assessing the potential impact of different types of cyber incidents, and estimating the financial costs associated with these incidents.


Bolstering Africa’s Cybersecurity

In recent weeks and months, we have seen opportunities arise, often provided by academia and government, to improve cyber education. However, some parts of Africa are still without decent levels of electricity. So, is the dream of cyber education for all unattainable? ... Despite this, Africa-based data security analysts point out that a dearth of qualified technicians coupled with a lack of investment in cybersecurity has been the direct contributor to a growth in the amount and scale of successful cyberattacks. In fact, according to research from IFC and Google, Africa’s e-economy is expected to reach $180 billion by 2025, but its lack of security support could halt that growth. Most of these campaigns are based upon spam or phishing efforts derived from information garnered from open source intelligence (OSINT), which is often more effective against a remote workforce that may be more exposed to attack techniques while outside of the technical and administrative controls of traditional office work.


Everything Can Change: The Co-Evolution of the CMO and the CISO

Organizations with an established partnership between the CISO and CMO tend to outperform their competitors. This collaboration allows for a cohesive approach to risk management and brand protection, resulting in increased customer trust and loyalty. Organizations that view the CISO purely as a technical operational leader often struggle with cybersecurity initiatives and fail to align security measures with business goals. This approach limits the potential for strategic contributions from the CISO in driving revenue growth and defending value. On the other hand, organizations that integrate the CISO into the go-to-market strategy leverage their expertise to address security concerns proactively, enhancing customer trust and differentiating themselves from competitors. By combining security practices with marketing efforts, these organizations can communicate their commitment to data protection and establish a competitive advantage in terms of trustworthiness. Effective CISOs have a seat at the executive table, allowing them to more directly align security initiatives with business outcomes. 


Machine unlearning: The critical art of teaching AI to forget

Machine unlearning is the process of erasing the influence specific datasets have had on an ML system. Most often, when a concern arises with a dataset, it’s a case of modifying or simply deleting the dataset. But in cases where the data has been used to train a model, things can get tricky. ML models are essentially black boxes. This means that it’s difficult to understand exactly how specific datasets impacted the model during training and even more difficult to undo the effects of a problematic dataset. OpenAI, the creators of ChatGPT, have repeatedly come under fire regarding the data used to train their models. A number of generative AI art tools are also facing legal battles regarding their training data. Privacy concerns have also been raised after membership inference attacks have shown that it’s possible to infer whether specific data was used to train a model. This means that models can potentially reveal information about the individuals whose data was used to train them.
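
The membership inference idea can be illustrated with a loss-threshold sketch: models tend to assign lower loss to examples they were trained on, so an attacker can guess membership from the loss alone. The threshold below is illustrative; real attacks calibrate it, often against shadow models.

```python
import math

def loss_on_true_label(confidence: float) -> float:
    """Negative log-likelihood the model assigns to the correct label."""
    return -math.log(max(confidence, 1e-12))

def guess_membership(confidence: float, threshold: float = 0.5) -> bool:
    """Loss-threshold membership inference: unusually low loss suggests the
    example was in the training set. Threshold is an assumed value."""
    return loss_on_true_label(confidence) < threshold

print(guess_membership(0.97))  # low loss  -> likely a training member
print(guess_membership(0.40))  # high loss -> likely unseen data
```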


Unit Tests Are Overrated: Rethinking Testing Strategies

Unit tests fare much more poorly with this metric than most people realize. The first problem is that they often don’t provide useful information about the actual state of the system under review. When unit tests are written as acceptance tests, they are often intricately coupled with the specific implementation. They will only fail if the implementation changes, not when changes break the system (e.g., verifying the value of a class constant). Using acceptance tests as regression tests must be done intentionally and thoughtfully, deleting everything that does not provide useful information about the system’s behavior. Another major problem with unit tests is that to test the inputs of one method, you often need to mock out the responses from other methods. When you do this, you are no longer testing the system you have; you are testing a system that you assumed you had in the past. The system can break while the unit test still passes, because the test encodes an assumption about an input that the real-world system no longer supplies.
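
A contrived example of that failure mode (my sketch, using Python's unittest.mock): the mock freezes yesterday's contract, so the test stays green even after the real dependency changes its payload.

```python
from unittest.mock import MagicMock

def get_user_age(db) -> int:
    return db.fetch_user()["age"]  # assumes the payload still has an "age" key

def test_get_user_age():
    db = MagicMock()
    # The mock encodes a past assumption. If the real service now returns
    # {"age_years": 42}, this test still passes while production breaks.
    db.fetch_user.return_value = {"age": 42}
    assert get_user_age(db) == 42

test_get_user_age()  # green, regardless of what the real system does
```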


The vital role the CISO has to play in the boardroom

Cybersecurity risk management and information governance are complex, gritty subjects that can be hard to follow for the uninitiated. Boardrooms aren’t the place for the ins and outs of the issue at hand. Learning to communicate effectively is possibly the single most important skill for aspiring and ambitious CISOs. Throughout history, great leaders have demonstrated an excellent ability to communicate, bringing people on a journey with them and gathering support along the way. This is not about dumbing down or glossing over the important parts. Rather, it’s about honing a fundamental business skill: being able to make a compelling argument clearly and concisely. You need to be able to translate critical cybersecurity information into business objectives. Cybersecurity risk management is a regulated requirement. Board directors, officers and senior management can be held liable for the decisions they make around cybersecurity risks and incidents. Clear and effective communication is critical in supporting organisations to make the right decisions that could later be relied upon to protect their people.


3 strategies that can help stop ransomware before it becomes a crisis

Without an incident response plan in place, companies typically panic, not knowing who to call, or what to do, which can make paying the ransom seem like the easiest way out. With a plan in place, however, people know what to do and will ideally have practised the plan ahead of time to ensure disaster recovery measures work the way they're supposed to. ... Having multiple layers of defense, as well as setting up multifactor authentication and data encryption, are fundamental to cybersecurity, but many companies still get them wrong. Stone recently worked with an educational organization that had invested heavily in cybersecurity. When they were hit by ransomware, they were able to shift operations to an offline backup. Then the attackers escalated their demands -- if the organization didn’t pay the ransom, their data would be leaked online. “The organization was well prepared for an encryption event, but not prepared for the second ransom,” Stone says. “There was actual sensitive data that would trigger a number of regulatory compliance actions.”



Quote for the day:

"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer

Daily Tech Digest - August 15, 2023

How to build employee trust as AI gains ground

Most experts agree, however, that newer AI tools are less about replacing people and more about eliminating mundane, manual, or number-crunching tasks that most employees already hate. In fact, the technology will mostly help free up workers to tackle more important tasks such as project management, data science research and, perhaps most importantly, creative thinking and problem solving. "There is no example today of an AI system that can perform data science totally independent of people," said Erick Brethenoux, a distinguished vice president analyst at research firm Gartner. A lot of the uncertainty and fear workers feel about generative AI tools is based on ignorance, experts say. AI, in its many forms, has been around for more than 50 years, but many people simply don’t recognize it’s been beside them all this time. “People have always been afraid of AI because the vision they have of it is science fiction; it’s a Hollywood vision of it,” Brethenoux said. “There’s a lot of hype around it."


Red Hat rivals form Open Enterprise Linux Association

At the heart of the new organization is a disagreement over the way Red Hat, long the dominant force in enterprise Linux, provides access to its source code. For years, the company supported the development of a Red Hat Enterprise Linux clone called CentOS, with the idea of providing a free alternative for testing and development purposes, given that paid support would be unnecessary for that purpose. However, increasingly, users began to implement CentOS instead of RHEL in production environments as well, with other companies, including CIQ, springing up to provide enterprise support. Accordingly, Red Hat stopped supporting CentOS in its previous form two years ago, in favor of an alternative called CentOS Stream. That, however, is an upstream distribution, meaning that it’s updated much more frequently, making it less suitable for production work. And earlier this summer, Red Hat made its source code less accessible, restricting access to paying Red Hat customers and obscuring some details of the way the code is put together to create the final distribution.


How FraudGPT presages the future of weaponized AI

FraudGPT signals the start of a new, more dangerous and democratized era of weaponized generative AI tools and apps. The current iteration doesn’t reflect the advanced tradecraft that nation-state attack teams and large-scale operations like the North Korean Army’s elite Reconnaissance General Bureau’s cyberwarfare arm, Department 121, are creating and using. But what FraudGPT and the like lack in generative AI depth, they more than make up for in ability to train the next generation of attackers. With its subscription model, in months FraudGPT could have more users than the most advanced nation-state cyberattack armies, including the likes of Department 121, which alone has approximately 6,800 cyberwarriors, according to the New York Times — 1,700 hackers in seven different units and 5,100 technical support personnel. While FraudGPT may not pose as imminent a threat as the larger, more sophisticated nation-state groups, its accessibility to novice attackers will translate into an exponential increase in intrusion and breach attempts, starting with the softest targets, such as in education, healthcare and manufacturing.


Application Rationalization: Is Complexity Avoidable?

Removing the clutter from your application portfolio is its own reward. Simplifying your software means: easier maintenance; greater agility; lower training requirements; reduced costs; faster rationalization in future. This is, indeed, all possible to achieve. With unlimited budget, and a willingness to both make tough choices about stripping back applications and be strict with your colleagues, you could of course remove all complexity from your portfolio. The question remains, however: should you? Fully optimizing your application portfolio is costly, time-consuming, and will likely cause a lot of frustration for software users along the way. True application rationalization involves a balancing act between technical debt and optimization, meaning some complexity will likely need to be tolerated. If your team communicates via Slack, for example, it would be easier to remove email and Zoom licenses. However, if your external stakeholders don't use Slack Connect, you could cripple your company's ability to function by doing so.


How to take action against AI bias

With AI adoption increasing rapidly, it’s critical that guardrails and new processes be put in place. Such guidelines establish a process for developers, data scientists, and anyone else involved in the AI production process to avoid potential harm to businesses and their customers. One practice enterprises can introduce before releasing any AI-enabled service is the red team versus blue team exercise used in the security field. For AI, enterprises can pair a red team and a blue team to expose bias and correct it before bringing a product to market. It’s important to then make this process an ongoing effort to continue to work against the inclusion of bias in data and algorithms. Organizations should be committed to testing the data before deploying any model, and to testing the model after it is deployed. Data scientists must acknowledge that the scope of AI biases is vast and there can be unintended consequences, despite their best intentions. Therefore, they must become greater experts in their domain and understand their own limitations to help them become more responsible in their data and algorithm curation.
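
As one concrete example of what a red team might automate, the sketch below applies the four-fifths rule to per-group selection rates. It is a screening heuristic of my own choosing here, not a complete fairness audit.

```python
def four_fifths_check(outcomes: dict[str, list[int]]) -> bool:
    """outcomes maps group -> 0/1 decisions. Flags disparate impact when a
    group's selection rate falls below 80% of the highest group's rate."""
    rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
    return min(rates.values()) >= 0.8 * max(rates.values())

# Hypothetical screening decisions for two applicant groups.
data = {"group_a": [1, 1, 0, 1, 0],   # selection rate 0.6
        "group_b": [1, 0, 0, 0, 0]}   # selection rate 0.2
print(four_fifths_check(data))  # False: 0.2 < 0.8 * 0.6 -> investigate
```

Running a check like this before release, and again on the deployed model's decisions, is one way to make the red-team/blue-team exercise repeatable rather than a one-off review.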


3 Ways Enterprise Architects Can Bridge the Socio-Technical Gap

Software architecture is often a series of trade-offs. However, for people not involved in the original decision, it is often no longer clear what the trade-off was or how that trade-off led to the decision. One approach to capturing these decisions is Architecture Decision Records (ADRs). Note that ADRs are not some kind of technical rule, they are essentially a document. But having such a document can be a useful communication device, as it creates a history that allows people to keep track of trade-offs made in the past. The code and architecture themselves can only communicate the current state, but not how that current state came to be. Note that recording decisions doesn’t make them permanent or immutable. ... Capturing the rationale behind architectural decisions through methods like Architecture Decision Records ensures a clear understanding of trade-offs made over time. Additionally, addressing architecture incrementally, akin to code-level refinements, offers a practical way to manage risk and avoid conflicting priorities.
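
For readers who have not seen one, a minimal ADR in the widely used Nygard-style format might look like the following; the content is invented for illustration.

```
ADR-007: Use PostgreSQL for the order service

Status: Accepted (2023-08-02)

Context: The order service needs transactional guarantees, and the team
already operates PostgreSQL for two other services.

Decision: Use PostgreSQL rather than the document store used elsewhere.

Consequences: Consistent operational tooling; schema migrations join the
release process; reporting queries may later need a read replica.
```

The Context and Consequences sections are what preserve the trade-off: a newcomer can see not just what was chosen but what was knowingly given up.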


Broken Promises of the Low-Code Approach

The reality is that many low-code solutions present a fundamental misunderstanding of software development: They conflate the challenge of understanding a programming language’s syntax with the challenge of designing effective application logic. Programming languages are just tools; their syntax is merely a means of expressing solutions. The true heart of software development lies in problem-solving, in crafting algorithms, data structures and interfaces that efficiently fulfill the application’s needs. By aiming to simplify software development through a graphical user interface (GUI), low-code solutions replace syntax without necessarily simplifying the fundamental challenge of designing robust applications. This approach can introduce multiple drawbacks while failing to alleviate the true complexities of software creation, ultimately having a negative impact on your team’s ability to deliver real value. ... Low-code solutions frequently grapple with limited customization, often failing to meet specific, complex or unique business requirements. The risk of vendor lock-in is another significant downside, potentially leaving users high and dry if there are changes in pricing, feature offerings or if the vendor closes shop.


Micro transformation: Driving big business benefit through quick IT wins

While it’s still early days to determine the success of the micro transformation, the initial customer feedback has been encouraging, Aird says. “There’s something intrinsically rewarding when you hear directly from customers about how much they’re enjoying the new tool, how it’s adding value to their purchasing experience, and how it makes the process of creating their own neon signs easier and more fun and exciting.” This is critical because Custom Neon operates in a “highly saturated e-commerce niche,” he adds, and micro transformations such as upgrading the website tool “subtly, but surely redefine the customer experience, contributing to our continued growth and competitiveness.” This kind of micro transformation underscores the power of agile methodology, enabling IT to identify bottlenecks, implement targeted improvements, and quickly see the effects, Aird says. “Moreover, they allow us to enhance our KPIs, notably in customer satisfaction and operational efficiency.”


Cybersecurity hiring gap: Time to rethink who can contribute

Ford sees the "cybersecurity talent shortage" as misidentified; he refers to the situation as an "experience shortage." As we all know, the only way to garner experience is by doing. He opened doors to "overlooked" talent with the creation of their Cybersecurity Career Reboot Program. The program's key factor probably broke every HR sorting tool, as they sought out individuals who had been passed over because they "lack the experience required to land entry-level jobs." ... They then used their Professional Rotation Experience Program (PREP), which took recent grads and put them in a "two-year rotational program that includes global exposure to all our cybersecurity functions. PREP participants gain experience with the foundations of cybersecurity through hands-on project work, exposure to a variety of experiences, and innovative training and development, rotating through the different teams within cybersecurity every six months during the program." While the focus of homegrown talent programs is on the new and eager employees, CISOs must also keep an eye on retaining and improving the talent already in place.


Generative AI – What Are the Legal Issues?

The pace of the development of AI far outstrips the legal, regulatory and ethical frameworks which need to be put in place to ensure that the benefits of AI are carefully considered. For anyone looking at adopting or developing AI technologies, risk assessments should be conducted to identify and mitigate the impact on individuals. ... Considering the dataset used to teach the algorithm will potentially identify areas of risk. For example, an AI designed to sift CVs and provide hiring recommendations might inherit any unconscious hiring biases from the underlying dataset of ‘successful applicant’ and ‘unsuccessful applicant’ CVs. Not all algorithms are born equal and consideration should be given to the sophistication and development of any product before use given the potential impact on individuals. ... As Gen AI can create new content, who will own the intellectual property in any new work, media, image or music? There may be IP issues if the Gen AI creator did not have sufficient rights to the information used in the training dataset and any contract should clearly set out IP ownership where possible.



Quote for the day:

"It is the responsibility of leadership to provide opportunity, and the responsibility of individuals to contribute." -- William Pollard

Daily Tech Digest - August 11, 2023

How to tell if your cloud finops program is working

A successful finops program should ensure compliance with applicable financial regulations and industry standards. These change across industries, but a few industries, such as finance and health, are more constrained by rules than others. A good finops program will help your company stay current with relevant laws, rules, and regulations, such as GAAP (generally accepted accounting principles) or IFRS (International Financial Reporting Standards). Regular audits and reviews should be conducted to ensure that financial processes and practices align with the required standards and laws. These are often overlooked by cloud engineers and cloud architects building and deploying cloud-based systems since most of them don’t have a clue about regulations and laws beyond the basics. If done well, finops should take the stress off those groups and automate much of what needs to be monitored regarding regulatory compliance. I was early money on finops, and for good reason. We need to understand the value of cloud computing right after deployment and monitor its value continuously. 


Why Data Science Teams Should Be Using Pair Programming

Based on what we learn about the data from EDA, we next try to summarize a pattern we’ve observed, which is useful in delivering value for the story at hand. In other words, we build or “train” a model that concisely and sufficiently represents a useful and valuable pattern observed in the data. Arguably, this part of the development cycle demands the most “science” from data scientists as we continuously design, analyze and redesign a series of scientific experiments. We iterate on a cycle of training and validating model prototypes and make a selection as to which one to publish or deploy for consumption. Pairing is essential to facilitating lean and productive experimentation in model training and validation. With so many options of model forms and algorithms available, balancing simplicity and sufficiency is necessary to shorten development cycles, increase feedback loops and mitigate overall risk in the product team. As a data scientist, I sometimes need to resist the urge to use a sophisticated, stuffy algorithm when a simpler model fits the bill.
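
A pairing session on model selection often boils down to an experiment like the following scikit-learn sketch (my illustration): cross-validate a simple and a complex model, and let the numbers argue for simplicity when they tie.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a real dataset.
X, y = make_regression(n_samples=300, n_features=10, noise=10.0, random_state=0)

for name, model in [("linear", LinearRegression()),
                    ("boosted trees", GradientBoostingRegressor(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
# If the simple model is within noise of the complex one, prefer it:
# it is cheaper to explain, validate, and maintain.
```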


Should IT Reinvent Technical Support for IoT?

A first step is to advocate for IoT technology purchasing standards and to gain the support of upper management. The goal should be for the company to not purchase any IoT technology that fails to meet the company’s security, reliability, and interoperability standards, which IT must define. None of this can happen, of course, unless upper management supports it, so educating upper management on the risks of non-compliant IoT, a job likely to fall to the CIO, is the first thing that should be done. Next, IT should create a “no exceptions” policy for IoT deployment that is rigorously followed by IT personnel. This policy will make it a corporate security requirement to set all IoT equipment to enterprise security standards before any IoT gets deployed. Finally, IT needs a way to stretch its support and service capabilities at the edge without hiring more support personnel, since budgets are tight. If something goes wrong at your manufacturing plant in Detroit while technical issues arise at your San Diego, Atlanta, and Singapore facilities, it will be a challenge to resolve all issues simultaneously with equal force.


Why AI Forces Data Management to Up Its Game

With so much storage growth, organizations never reach the point where storage is no longer a constant challenge. The combination of massive capacity growth and democratized AI make it imperative to implement effective data management from the edge to the cloud. A strong foundation for artificial intelligence necessitates well-organized data stores and workflows. Many current AI projects are faltering due to a lack of data availability and poor Data Management. Skilled Data Management, then, has become a key factor in truly realizing the potential of AI. But it also plays a vital role in containing storage costs, hardening data security and cyber resiliency, verifying legal compliance and enhancing customer experiences, decision-making, and even brand reputation. ... Using metadata and global namespaces, the Data Management layer makes data accessible, searchable, and retrievable on whatever storage platform or media it may reside. It adds automation to facilitate tiering of data to long-term storage as well as cleansing data and alerting on anomalous conditions.
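
To sketch what metadata-driven automation can look like, here is a toy tiering policy keyed off last-access metadata. The paths, thresholds, and catalog shape are all assumptions for illustration.

```python
import time

# Hypothetical catalog: path -> metadata the management layer tracks.
catalog = {
    "s3://prod/orders.parquet": {"last_access": time.time() - 3 * 86400},
    "s3://prod/logs-2021.json": {"last_access": time.time() - 400 * 86400},
}

def pick_tier(meta: dict, hot_days: int = 30, cold_days: int = 365) -> str:
    """Toy tiering policy; real systems would also weigh size,
    compliance holds, and retrieval cost."""
    idle_days = (time.time() - meta["last_access"]) / 86400
    if idle_days < hot_days:
        return "hot"
    return "warm" if idle_days < cold_days else "archive"

for path, meta in catalog.items():
    print(path, "->", pick_tier(meta))
```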


Hybrid work is entering the 'trough of disillusionment'

Even though remote and hybrid work practices are in the trough now, that doesn’t mean they’ll stay there. Some early adopters eventually overcome the initial hurdles and begin to see the benefits of innovation and best practices emerge. Until then, the return-to-office edicts continue to roll out. ... Even with an uptick in return-to-office mandates, office building occupancy continues to remain below pre-pandemic levels. The average weekly occupancy rate for 10 metropolitan areas in the United States this week was below 50% (48.6%), according to data tracked by workplace data company Kastle Systems. That occupancy rate is actually down 0.6% from last week. Office occupancy rates change substantially, depending on the day of the week. Tuesdays, Wednesdays and Thursdays are the most popular in-office days. Globally and in the US, organizations have moved from ad hoc hybrid work policies, where employees could pick their days in the office, to structured schedules.


Cisco: Hybrid work needs to get better

While organisations in APAC have been progressive in adopting hybrid work arrangements, Patel cautioned them against making the mistake of mandating that employees work in the office all the time. “It’s much better to create a magnet than a mandate,” he said. “Give people a reason to come back to the office because when they collaborate in the office, there’s going to be this X factor that they don’t get when they are 100% remote.” Patel said adopting hybrid work would also help organisations recruit the best talent from anywhere in the world, enabling more people to participate equally in a global economy. “The opportunity is very unevenly distributed right now, but human potential is pretty evenly distributed, so it would be nice if anyone in a village in Bangladesh can have the same economic opportunity as someone in Silicon Valley. “Most of the time, the mindset is that you are distance-bound, so if you don’t happen to be in the same geography, then you don’t have access to opportunity. That’s a very archaic way of thinking and we need to think about this in a much more progressive manner,” he said.


Rethinking data analytics as a digital-first driver at Dow

The first step in this journey involved bringing our D&A teams under one roof in the first half of 2022. This team eventually became Enterprise D&A, with team members based around the world. To develop the strategy, we held discussions with external partners and interviewed Dow leaders to identify trends important to business success. Then we looked at where those trends align with key focus areas like customer engagement, accelerating innovation, market growth, reliability, sustainability, and the employee experience. Our central task was to translate our findings into a strategy that creates the most value for our stakeholders: our customers, our employees, our shareholders, and our communities. We determined we needed to move to a hub-and-spoke model. To make this work and achieve our vision of transforming data into a competitive advantage, we would need to build a strong culture of collaboration around D&A and support it with talent development within our organization and across the company.


Why data isn’t the answer to everything

What happens when you disagree with the AI? What are you then going to go and do? If you’re always going to disagree with it and do what you wanted to do anyway, then why bother bringing the AI in? Have you maybe mis-written your requirements and what that AI system is going to go and do for you? A lot of this is the foundational strategy on organisational design, people design, decision making. As an executive leader, it’s really easy to stand up on stage and say, ‘Here’s our 2050 vision or our 2030 vision.’ At the end of the day, an executive doesn’t do much, they just create the environment for things to happen. It’s frontline staff that make decisions. There are two reasons why you wouldn’t make a decision: you don’t have the right data and context or you don’t have the authority to make that decision. Typically, you only escalate a decision when you don’t have the data and context. It’s your manager that has more data and context, which enables that authority. So, with more data and context, I can push more authority and autonomy down to the frontline to actually go and drive transformation. 


Whirlpool malware rips open old Barracuda wounds

The vulnerability, according to a CISA alert, was used to plant malware payloads of Seaspy and Whirlpool backdoors on the compromised devices. While Seaspy is a known, persistent, and passive Barracuda offender masquerading as a legitimate Barracuda service "BarracudaMailService" that allows the threat actors to execute arbitrary commands on the ESG appliance, Whirlpool backdooring is a new offensive used by attackers who established a Transport Layer Security (TLS) reverse shell to the Command-and-Control (C2) server. "CISA obtained four malware samples -- including Seaspy and Whirlpool backdoors," the CISA alert said. "The device was compromised by threat actors exploiting the Barracuda ESG vulnerability." ... Whirlpool was identified as a 32-bit executable and linkable format (ELF) file that takes two arguments (C2 IP and port number) from a module to establish a TLS reverse shell. A TLS reverse shell is a method used in cyberattacks to establish a secure communication channel between a compromised system and an attacker-controlled server.
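
At its core, a reverse shell is simply an outbound connection that hands the remote end a command channel, which is why defenders often hunt for unexpected established outbound sessions. A minimal psutil sketch of that idea follows; the port allowlist is a made-up assumption, and real hunting would baseline per host.

```python
import psutil

# Hypothetical allowlist of expected outbound destination ports.
EXPECTED_PORTS = {25, 53, 443}

def suspicious_outbound():
    """List established TCP connections to unexpected remote ports.
    May need elevated privileges to see other processes' sockets."""
    hits = []
    for conn in psutil.net_connections(kind="tcp"):
        if (conn.status == psutil.CONN_ESTABLISHED and conn.raddr
                and conn.raddr.port not in EXPECTED_PORTS):
            hits.append((conn.pid, f"{conn.raddr.ip}:{conn.raddr.port}"))
    return hits

for pid, dest in suspicious_outbound():
    print(f"pid {pid} -> {dest}")
```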


How digital content security stays resilient amid evolving threats

AI technology advancements and the great opportunities it provides have also motivated business leaders and consumers to reassess the underlying trust models that have made the internet work for the past 40 years: every major advance in computing tech has stimulated sympathetic updates in the computer security industry, and this recent decisive move into a world powered by data, and auto-generated data, is no different. Provenance will become a key component in determining the trustworthiness of data. The changes, though, extend beyond technology. Rather than continuing to use systems that were built to assume trust and then verify, businesses and consumers will shift to verify-then-trust systems, which will also bring mutual accountability into all processes where data is shared. Standards, open APIs and open-source software have proven adaptable to changing technology before and will continue to prove adaptable in the age of AI and significantly higher volumes of digital content.



Quote for the day:

"He who wishes to be obeyed must know how to command" -- Niccol_ Machiavelli

Daily Tech Digest - August 10, 2023

AMD's Zen architecture: The fundamentals of these Zen 4 CPUs

While the computing industry, CPU enthusiasts, and even AMD itself expected the road to performance leadership to be long, it was actually quite short. Zen 2, the successor to Zen, launched in 2019 and shocked pretty much everyone by blowing Intel out of the water. AMD racked up a massive lead in multi-threaded performance in pretty much every segment, had significantly better power efficiency in virtually every workload, and even surpassed Intel in single-threaded performance, which AMD hadn't been able to do for over a decade. From here, the road just got easier for AMD. The server market was (and still is) the most important area for AMD to make progress in, and by the time Zen 3 came out in 2020, AMD controlled 7% of the market, up from nearly 0% before Zen came out. This was made all the easier thanks to how Intel absolutely screwed up its plans to launch powerful 10nm CPUs, leaving AMD to face off against outdated and practically obsolete 14nm chips, which are some of the worst Intel has ever made.


Embracing the ‘Pedagogy of Error’ in Cybersecurity Education

The lesson I am always reminded of is that “we must abandon certainties in order to build from the challenge of uncertainty.” The deeper we delve into global instabilities and their challenges, the better perspectives and questions we can ask ourselves. It would be very sad to know that everything has been solved. Therefore, when we challenge current knowledge and explore different alternatives, we are opening up the possibility of seeing beyond what is known and thereby introducing something different. ... The academy must maintain and motivate the curiosity, expectations, challenges and adventures that arise when uncertainty manifests itself in the inevitability of failure. In this sense, it must motivate the pedagogy of “error”: understanding “error” as part of the process and not as a result is what makes it possible to create cybersecurity and IT professionals who are open to constant learning, who let their previous knowledge be questioned, and who maintain a proactive stance in the face of adversaries’ challenges.


The dark side of the cloud: How cloud is becoming prey to sophisticated forms of cyber attack

As businesses increasingly adopt cloud-based solutions, cyber criminals—who are constantly looking for new vulnerabilities to exploit—are finding it easier to engineer data breaches, explains Rajesh Garg, EVP, Chief Digital Officer & Head of Applications & Cybersecurity at data centre service provider Yotta Data Services. Around 98 per cent of organisations globally now utilise some form of cloud-based tech, while many have adopted multi-cloud deployments from multiple cloud service providers. The massive adoption of the cloud environment has also given rise to Shadow IT, where employees or departments use hardware or software from external sources without the knowledge of the IT or security group of the organisation. This creates a vacuum where the responsibility for managing security within organisations is not clearly defined. “Cloud infrastructure is inherently complex; that increases manifold with the addition of hybrid and multiple-cloud models,” says Atul Gupta.


Google Cloud launches Chronicle CyberShield to help government agencies tackle threats

A primary component of Chronicle CyberShield is establishing a modern government security operations center (SOC), comprising a network of interconnected SOCs to scale and aggregate security threats, Google Cloud said in a press release. Chronicle CyberShield enables governments to leverage cyber threat intelligence from Google and Mandiant, now part of Google Cloud, to build a scalable and centralized threat intelligence and analysis capability, according to the firm. This is integrated operationally into the government SOC to identify suspicious indicators and enrich the context for known vulnerabilities. The solution also allows governments to build a coordinated monitoring capability with Chronicle SIEM to simplify threat detection, investigation, and hunting with the intelligence, speed, and scale of Google. By implementing Chronicle across a network of SOCs, attack patterns and correlated threat activity across multiple entities are available for investigation and analysis. 


International implications of hack-for-hire services

A lack of consequences for hackers that contract themselves out to foreign clients has only encouraged the hack-for-hire industry in India. US prosecutors indicted Sumit Gupta, the Director of Indian hacking firm BellTroX, in 2015 for hacking on behalf of two American lawyers, yet the Indian government never took action against him. After the 2015 indictment failed to produce a conviction, BellTroX went on to commit the Dark Basin hacks in 2020. BellTroX also surfaced as part of a criminal case against an Israeli private detective who hired Indian hacking firms on behalf of unnamed clients in Israel, Europe, and the US. The private detective pleaded guilty in 2022, but the hackers in India have yet to face any legal consequences. This lack of enforcement is not because India does not have the legal infrastructure to prosecute cybercrimes; the Information Technology Act of 2000 and its subsequent amendments in 2008 provide that infrastructure.


Windows Defender-Pretender Attack Dismantles Flagship Microsoft EDR

In studying the Windows Defender update process, Bar and Attias discovered that signature updates are typically contained in a single executable file called the Microsoft Protection Antimalware Front End (MPAM-FE[.]exe). The MPAM file in turn contained two executables and four additional Virtual Device Metadata (VDM) files with malware signatures in compressed — but not encrypted — form. The VDM files worked in tandem to push signature updates to Defender. The researchers discovered that two of the VDM files were large "Base" files that contained some 2.5 million malware signatures, while the other two were smaller but more complex "Delta" files. They determined the Base file was the main file that Defender checked for malware signatures during the update process, while the smaller Delta file defined the changes that needed to be made to the Base file. Initially, Bar and Attias attempted to see if they could hijack the Defender update process by replacing one of the executables in the MPAM file with a file of their own.


Securing The Future: Embracing Cloud-Centric Cybersecurity Strategies

Upskilling an entire cybersecurity organization is a significant undertaking that requires planning, time, funding and—most importantly—leadership buy-in. CISOs won't be able to snap their fingers and transform their teams into the cloud-literate leaders of tomorrow. After all, it could take up to six months of training just to have an intelligent-sounding conversation about the cloud, let alone be productive. Fortunately, much of the educational infrastructure necessary for upskilling workforces is available. Cloud service providers AWS, Microsoft Azure and Google Cloud each have a portfolio of cloud computing certifications. Platforms such as A Cloud Guru and Cloud Academy offer multi-cloud training. Security-focused cloud training and certifications are available from organizations such as the SANS Institute, (ISC)2 and the Cloud Security Alliance. ... These senior leaders are generally no longer "hands on keyboard" professionals. They lead programs, set priorities and assign goals. Of course, they need to be conversant with the technology their organization uses.


Northern Ireland Police at Risk After Serious Data Breach

"This is the most serious breach I have ever seen, due to the potential it could lead to the death or injury of those whose data has been disclosed," said Brian Honan, who heads Dublin-based cybersecurity firm BH Consulting. Exposed information could be abused not only by criminals, including for revenge, but also by republican paramilitaries who continue to target police officers and employees. The most recent attack occurred in February, when off-duty senior detective John Caldwell was shot in a sports complex in Omagh. He survived with "life-changing" injuries, said the chairman of Northern Ireland's Police Federation. Authorities arrested 11 people and charged three with being members of a proscribed terrorist group - in this case, the New IRA, a splinter of the Provisional Irish Republican Army that rejects a final 1997 terrorism cease-fire that helped lead to the 1998 Good Friday Agreement. The PSNI says it is working to "to identify any security issues" posed by the breach as quickly as possible, and it has notified the Information Commissioner's Office.


Ethics as a process of reflection and deliberation

You can integrate ethics into your projects by organising a process of ethical reflection and deliberation, in three steps: (1) put the issues or risks on the table – things that you are concerned about, things that might go wrong; (2) organise conversations to look at those issues or risks from different angles – you can do this in your project team, but also with people from outside your organisation; (3) make decisions, preferably in an iterative manner – you take measures, try them out, evaluate outcomes, and adjust accordingly. A key benefit of such a process is that you can be accountable; you have looked at issues, discussed them with various people, and have taken measures. Practically, you can organise such a process in a relatively lightweight manner, e.g., a two-hour workshop with your project team. Or you can integrate ethical reflection and deliberation in your project, e.g., as a recurring agenda item in your monthly project meetings, and involve various outside experts on a regular basis.


6 legal ‘gotchas’ that could sink your CIO career

You might be thinking that your company will defend you for liability, and you might be right if your company has liability coverage for its officers, and you are an officer. But does your company have liability insurance for its executives? It’s standard for most Fortune 500 companies to have liability insurance for their executives, but a substantial number of private and not-for-profit companies are facing challenges in rising premiums and may not have liability protection. If you’re interviewing for a CIO job, it’s prudent to find out whether the company you’re interviewing with offers liability protection and indemnification insurance for its executives. ... When CIOs are sued or fired, it’s often because of a significant cybersecurity breach. The reason for this is because CIOs are ultimately responsible for safeguarding corporate information. When a breach occurs, it is always perceived as being on the CIO’s watch, and the repercussions can be severe. 



Quote for the day:

"We learn by example and by direct experience because there are real limits to the adequacy of verbal instruction." -- Malcolm Gladwell

Daily Tech Digest - August 09, 2023

You can’t run away from technical debt

It could be poor architecture because IT leaders picked the less efficient path to a solution. Perhaps they went with a specific vendor, even a cloud provider, for the wrong reasons, such as a preexisting relationship. This led to a solution that functions but adds technical debt instead of removing it. I've heard the excuses: a decision was made to expedite solution delivery for an urgent business purpose. However, that's almost never the case. Most of the time technical debt accumulates from misguided decisions; the company could have gone in a direction that did not create technical debt but chose not to. Indeed, many of the better solutions would have cost less money and taken less time to deploy. In other words, most technical debt is a collection of self-inflicted wounds, usually caused by leaders who don't bother to understand the bigger picture and take technological shots in the dark. Of course, "it works," but it significantly increases technical debt. I've second-guessed a great many of these decisions in my 40-year career.


Australia’s Banking Industry Mulls Better Cross-Collaboration to Defeat Scam Epidemic

The Australian banking sector, for its part, has already been looking for ways to work together to combat fraud. In May, 17 banks announced that, thanks to a collaboration between them, they had been able to halve the time it takes to identify and block payments to scam operators. This effort is powered by the ABA's Fraud Reporting Exchange, an initiative that cross-matches data between participating banks and allows for near real-time communication of fraudulent transactions across the network. Other government initiatives, meanwhile, include the new National Anti-Scams Centre, which went live on July 1 and will enable faster sharing of information so that police and regulators can act on scams more quickly. There will also be an Australian SMS Sender ID registry providing a "whitelist" of legitimate sender IDs, which can be used to block scam calls and SMS messages that falsely claim to come from government agencies.


6 ways CIOs sabotage their IT consultant’s success

Here’s a promise made during negotiations that’s often DOA once the project starts: The client will provide the consultant with the information necessary for the project to move forward. Of course, once the project starts, it turns out that nobody in the client organization can provide that information. Why would the client make a promise like this? One reason: Whoever in the client organization is responsible for providing the information isn’t willing to admit that they can’t, either to their boss or to the consultants. In the short term it’s safer to make the promise and kick the can down the road, until the project has been going on long enough to shift the blame to those damned consultants who keep on making unrealistic requests of IT staff who are already overworked and underpaid. (Take a deep breath.) There’s another reason some clients can’t deliver information on demand: They’ve outsourced the IT functional area responsible for the information needed, and the outsourcer isn’t willing to help out consultants they see as likely competitors.


Technical vs. Adaptive Leadership

While technical leadership is essential, it does come with limitations. Relying solely on technical prowess can lead to a narrow focus, overlooking broader organizational dynamics and human factors. Additionally, in an ever-changing environment, technical skills can become outdated, necessitating a constant commitment to learning and adapting. Adaptive leadership, on the other hand, revolves around the ability to navigate uncertainty, ambiguity, and change. It is a leadership approach that focuses on guiding teams and organizations through transformational periods. Adaptive leaders are skilled at fostering resilience, encouraging creative problem-solving, and inspiring a culture of continuous learning. Adaptive leaders excel in communication and emotional intelligence. They possess the capacity to connect with their teams on a deeper level, empathizing with their challenges and aspirations. This ability to understand and relate to individuals creates an environment of trust, openness, and collaboration. 


Why big tech shouldn’t dictate AI regulation

Formed initially by Anthropic, Google, Microsoft, and OpenAI, the Forum is presented as an industry body that will ensure the 'safe and responsible development of frontier AI models'. While not defined in the Forum's initial press release, 'frontier AI models' can be understood to be general-purpose AI models which, in the words of the Ada Lovelace Institute, 'have newer or better capabilities' than other models. The Forum's objectives include undertaking AI safety research; disseminating best practices to developers; and collaborating with parties such as academics, policymakers, and civil society bodies to influence the design and implementation of AI 'guardrails'. Membership, meanwhile, will be restricted to organisations which (in the Forum's eyes) both develop frontier models and are committed to improving their safety. Admittedly, answers to questions around the safe and effective development of AI will not arrive without investment, so it is encouraging to see a commitment to this collaborative approach among prominent AI vendors. Likewise, effective AI regulation will rely on input from those with real domain expertise: the industry's doors must remain open to governments and policymakers.


Introduction to Apache Arrow

Apache Arrow is a framework for defining in-memory columnar data that every processing engine can use. It aims to be the language-agnostic standard for columnar memory representation, facilitating interoperability. It was developed by several open source leaders from companies also working on Impala, Spark and Calcite. Among the co-creators is Wes McKinney, creator of Pandas, the popular Python library used for data analysis. He wanted to make Pandas interoperable with other data processing systems, a problem that Arrow solves. ... Another benefit of Apache Arrow is its integration with Apache Arrow Flight SQL. Having an efficient in-memory data representation is important for reducing memory requirements and CPU and GPU load. However, without the ability to transfer this data efficiently across networked services, Apache Arrow wouldn't be that appealing. Luckily, Apache Arrow Flight SQL solves this problem: it is a "new client-server protocol developed by the Apache Arrow community for interacting with SQL databases that makes use of the Arrow in-memory columnar format and the Flight RPC framework."
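
To make the columnar interoperability concrete, here is a minimal sketch in Python using the pyarrow library (the column names and values are illustrative, not from the article). It builds an Arrow table, round-trips it through Pandas, and then serializes it with the Arrow IPC stream format, which is also the representation that Flight RPC carries over the network:

```python
# Minimal sketch, assuming pyarrow and pandas are installed
# (pip install pyarrow pandas). The data here is illustrative.
import io

import pandas as pd
import pyarrow as pa

# Build an Arrow table; each column is stored contiguously in memory
# in Arrow's language-agnostic columnar format.
table = pa.table({
    "sensor_id": [1, 2, 3],
    "reading": [0.52, 0.71, 0.93],
})

# Interoperate with Pandas: conversion is cheap (zero-copy for many
# column types) because both sides agree on the memory layout.
df = table.to_pandas()
roundtrip = pa.Table.from_pandas(df)

# Serialize with the Arrow IPC stream format and read it back; this is
# the same columnar representation that Flight RPC ships over gRPC.
sink = io.BytesIO()
with pa.ipc.new_stream(sink, table.schema) as writer:
    writer.write_table(table)

restored = pa.ipc.open_stream(sink.getvalue()).read_all()
assert restored.equals(table)
```

For the client side of Flight SQL itself, a hedged sketch using the ADBC Flight SQL driver for Python follows; the endpoint URI is hypothetical and assumes a Flight SQL-capable server is already running:

```python
# Hypothetical sketch: assumes the adbc-driver-flightsql package is
# installed and a Flight SQL server is listening at the URI below.
import adbc_driver_flightsql.dbapi as flightsql

with flightsql.connect("grpc://localhost:32010") as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT 1 AS answer")
        # Results arrive as an Arrow table, so no row-by-row
        # deserialization is needed on the client.
        print(cur.fetch_arrow_table())
```

Because every engine that speaks Arrow can consume the same buffers, no per-system serialization layer is needed; that shared memory layout is the interoperability the article describes.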


How to develop an intrapreneurial culture

A company that wants to inspire intrapreneurship needs to have the ability to mobilize resources across the organization to support the opportunities it surfaces, which can carry execution and reputational risks. But because of the substantial potential upsides, encouraging intrapreneurship should be central to an organization’s mission. Take the example of the Happy Meal, which has been pivotal to the growth of McDonald’s: the idea came from a maverick internal team. The Sony PlayStation became the first gaming console to ship over 100 million units—though it required internal champions to pick up the pieces from a failed external partnership. Southwest Airlines’ humorous safety announcements—pioneered by the airline’s founder as an integral part of the business model—have enhanced its customer experience and business. When intrapreneurship is encouraged, there’s evidence that people enjoy greater autonomy and a stronger connection to the organization’s purpose; not surprisingly, this leads to higher productivity and engagement. What does it take to develop more of this culture, and then to apply it? It’s not an exact science, but there are ways to give your intrapreneurs a leg up.


How Emotional Connections Can Drive Change: Applying Fearless Change Patterns

The Fear Less pattern suggests that you can appreciate their opposition. Ask for Help from the skeptic because they see the innovation in a different way than you do - therefore, they may be able to provide useful information you haven’t considered. You will learn from them and, in the process, they may begin to shift from the act of resisting to rethinking. You may not be able to convince them and trying to do this will likely take more time than you have. But you can seek the places where you agree and, perhaps, create some unique ideas that begin with those points of agreement. Most importantly, when you ask for their thoughts on the upcoming change, they will begin to become involved in the initiative, rather than simply complaining on the sidelines. They will recognize you care about what they can contribute and, as one of our Fearless Change readers pointed out, it doesn’t make it as much fun for them to complain. You may even want to seek out some skeptics to become a Champion Skeptic, taking on the official role of pointing out flaws and challenges at strategic points throughout the change initiative.


India Data Protection Bill Approved, Despite Privacy Concerns

The bill specifically states that the data fiduciary shall give the data principal the option to access such a request for consent in English or any language specified in the Eighth Schedule to the Constitution of India. Cross-border transfers have proved a trickier point, though: a PwC insight called the relevant provision "much-debated mandatory localization," as the central government may notify countries or territories outside India to which a data fiduciary may transfer personal data. Cavey says the concerns about the bill are that this draft is more relaxed than the previous one, and that fiduciaries will have more power over data principals. "Less protection means that detection and investigation will be harder for the regulatory body," he says. The bill also states that the central government holds the authority to select the members of the Personal Data Protection Board, thus compromising its independence. Cavey says a main concern is how the Data Protection Board will operate, how independent it will be, and how it will work in conjunction with the government.


Using creative recruitment strategies to tackle the cybersecurity skills shortage

Traditionally, there's been an assumption that to begin a career in cybersecurity, you must have a specialized education and resume. However, the expanding threat landscape has forced the industry to reconsider what makes great talent, placing new emphasis on soft skills and varied backgrounds, especially when it comes to combating the next big threat. Internships and apprenticeships can then offer the additional training needed to build a successful cybersecurity career. Education should also be continuous in the cybersecurity field, so organizations must make an active effort to train the next generation of the workforce. This means supporting current employees and encouraging them to learn in whatever way suits them best. External and internal internships and apprenticeships are key to achieving this: they not only create more awareness of what it actually takes to work in cybersecurity but also help people within and outside the organization develop the skills needed to meet the evolving threat landscape.



Quote for the day:

"Leadership is a journey, not a destination. It is a marathon, not a sprint. It is a process, not an outcome." -- John Donahoe