Daily Tech Digest - July 05, 2024

AI washing: Silicon Valley’s big new lie

The cumulative effect of AI washing is that it leads both the public and the technology industry astray. It fuels the delusion that AI can do things it cannot do. It makes people think AI is some kind of all-purpose solution to every problem — or a slippery slope into dystopia, depending on one’s worldview. AI washing incentivizes inferior solutions, focusing on “magic” rather than quality. Claiming that your dog-washing hose is “powered by AI” doesn’t mean you end up with a cleaner dog. It just means you have an overpriced hose. AI washing warps funding. Silicon Valley investment nowadays is totally captured by both actual AI and AI-washing solutions. Even savvy investors may overlook AI-washing exaggeration and lies, knowing that the AI story will sell in the marketplace thanks to buyer naiveté. The biggest problem, however, is not delusional selling by the industry, but self-delusion. Purveyors of AI solutions believe that human help is a badge of shame, when in fact I think human involvement would be received with relief. People actually want humans involved in their shopping and driving experiences.


Healing cyber wounds in global healthcare

Since AI technology has advanced and medical device security lags, the ease of attack and the potential reward for doing so have made healthcare institutions too tempting to ignore. The Office for Civil Rights (OCR) at Health and Human Services (HHS) is investigating the Change Healthcare attack to understand how it happened. The investigation will address whether Change Healthcare followed HIPAA rules. ... The financial impact of cyberattacks on healthcare providers can be devastating. The Change Healthcare breach led to significant cash flow disruptions, with providers losing millions daily. In response to this crisis, industry leaders and political figures have called for federal funding to support healthcare providers and ensure the continuity of essential services. The Senate majority leader and the American Hospital Association (AHA) have urged the federal government to provide financial assistance to mitigate the impact of the cyberattack, including accelerated and advanced payments to hospitals, pharmacies, and other affected entities. This federal funding can help healthcare providers adopt advanced security measures and recover from the financial impact of cyberattacks.


The next 10 years for cloud computing

The anticipated productivity gains and cost savings have not materialized, for the most part. The promised efficiencies did not translate into significant improvements in operational productivity for many organizations, and cloud platforms cost at least twice as much as traditional systems. The sharp decline in the costs of on-premises computing and storage servers during the past decade exacerbated the situation for public cloud providers. This threw a monkey wrench into the savings that the cloud promised over traditional on-premises systems. ... Cloud providers are now faced with “cloud exit” issues while focusing on AI growth. Their market continues to stagnate as enterprises find that a mix of on-premises and cloud platforms is perhaps more cost-effective, considering the operational costs of AI. In other words, AI is delaying the reality they would otherwise likely face in the short term. ... The days of enterprises buying cloud systems in haste left too many to repent at leisure. Vendors must better understand what enterprises should pay to find value and thus reduce the exodus to colocation providers, managed service providers, and enterprise data centers.


State of play: cloud in financial services

Banks are fully aware of the need for digital transformation and shifting legacy applications to the cloud in order to remain competitive, but enacting it across the entire banking value chain in a unified manner is not a simple task. Omdia’s 2024 IT Enterprise Insights Survey, for instance, shows that most retail banks have made some inroads into digital transformation, with respondents most likely to have made progress in adopting cloud services, but just 29% state that they have made significant progress. Many banks have taken a phased approach to digital transformation, often working with multiple product vendors. But there is a growing recognition that this approach brings its own challenges in terms of managing numerous vendors and roadmaps. ... Modernising the core banking system can be costly, time-consuming, and complex. However, anecdotal evidence suggests that banks are spending 85% of their technology budgets on maintaining their existing core banking tech and the remainder on launching new products – a ratio that can be flipped once they have fully modernised their core, providing them with enormous scope to innovate.


What is dark fiber and is it right for your business?

The type of dark fiber available varies between locations. So-called metro dark fiber, typically found in built-up urban areas, tends to comprise larger cables with a higher fiber count, which means they offer more flexibility and different types of connection, such as point-to-multipoint, where a cable can service multiple destinations. Long-haul dark fiber, in contrast, is often constructed using single-mode fiber, which has a smaller glass core and as such is likely to offer only simpler, point-to-point connections. However, there are no significant distance limitations on dark fiber, meaning it can be used to connect sites in locations many miles apart. Dark fiber can be an alluring solution for businesses with rapidly evolving or highly variable networking needs. Users can choose when and how to scale up bandwidth to meet the demands of their organization without having to wait for their ISP to carry out this process. It also avoids the limitations of a contract with an ISP, which will likely dictate the available data transfer rates and impose fees for network upgrades.


Examining the Risks of IT Hero Culture

In an IT hero culture, individual accomplishments are celebrated over teamwork, with a high value placed on swift responses and constant availability. This type of workplace includes a small group of individuals who bear a disproportionate responsibility for critical tasks and decision making. Typically, this culture appears in organizations lacking formal processes, requiring these so-called heroes to work extensive hours to maintain operations. ... IT hero culture – despite its immediate benefits – often proves to be a short-term solution with significant long-term drawbacks. When these indispensable individuals are absent, organizations face bottlenecks and inefficiencies. Transitioning to a process-driven culture enhances organizational effectiveness and efficiency, addressing these challenges. This transition, usually prompted by external stakeholders such as bankers, shareholders, and customers, as well as internal forces such as the board and senior management, moves away from overreliance on individual heroics to a more sustainable, team-oriented approach.


Will the cost of scaling infrastructure limit AI’s potential?

AI scaling, much like any other type of technology scaling, is dependent on infrastructure. “You can’t do anything else unless you go up from the infrastructure stack,” Paul Roberts, director of Strategic Account at AWS, told VentureBeat. Roberts noted that there was a big explosion of gen AI that got started in late 2022 when ChatGPT first went public. While in 2022 it might not have been clear where the technology was headed, he said that in 2024 AWS has its hands around the problem very well. AWS in particular has invested significantly in infrastructure, partnerships and development to help enable and support AI at scale. ... The resources required to train increasingly bigger LLMs aren’t the only issue. Bresniker noted that after an LLM is created, inference runs on it continuously, and when that is running 24 hours a day, 7 days a week, the energy consumption is massive. “What’s going to kill the polar bears is inference,” Bresniker said. ... According to Bresniker, one potential way to improve AI scaling is to include deductive reasoning capabilities, in addition to the current focus on inductive reasoning.


Smashing Silos With a Vulnerability Operations Center (VOC)

The responsibility for VM typically sits within the security operations center (SOC). The SOC is, after all, the frontline defense against cyberthreats, equipped with the tools, resources and processes to identify and mitigate vulnerabilities. Yet this strategy comes with its pitfalls, as SOC teams are already navigating a variety of responsibilities, from managing active threats to threat hunting. Enter the VOC, offering an approach that complements the SOC by prioritizing prevention rather than just responding to incidents. This collaboration between the two means that if the VOC discovers a Log4j vulnerability, for instance, the SOC team will be promptly notified. Then, the response team can swoop in if prevention fails. A VOC lets organizations manage vulnerabilities strategically and coherently, which ensures that the most serious threats are handled systematically. This specialized entity within an organization focuses on identifying, assessing and mitigating vulnerabilities in IT systems and networks. It acts as a central hub for vulnerability management, leveraging advanced tools and processes to continuously monitor for security weaknesses and coordinate response strategies.
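To make the VOC-to-SOC hand-off concrete, here is a minimal sketch, with hypothetical names and scoring, of the kind of triage a VOC might run before notifying the SOC: findings that are known to be exploited in the wild (such as Log4Shell) outrank raw severity alone.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float            # CVSS base score, 0.0-10.0
    exploited_in_wild: bool
    asset: str

def triage(findings):
    """Order findings so the most urgent come first:
    known-exploited issues beat raw CVSS score alone."""
    return sorted(
        findings,
        key=lambda f: (f.exploited_in_wild, f.cvss),
        reverse=True,
    )

def notify_soc(finding):
    # Placeholder hand-off; in practice this would open a ticket
    # or fire an alert in the incident-response tooling.
    return f"SOC notified: {finding.cve_id} on {finding.asset}"
```

The two-part sort key captures the VOC's value: a medium-severity bug under active exploitation gets attention before a theoretical critical one.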


Software Engineering, Startup Thinking

The challenge for organizations trying to adopt a more agile approach is that there are often simply too many silos, not enough skilled people, and a saturated technology market with too many tools. “Turning around a culture like this that prohibits scale is time-consuming and takes on average, three years to achieve,” he says. Given that the end goal of developing a more agile approach is to generate untrammeled innovation across an organization, getting the culture right is critical. ... Brial says he recommends fostering an environment where cross-functional teams bring together individuals from different departments like development, operations and security, to work collaboratively toward a common goal. This requires cross-training, where team members can gather knowledge and skills in areas beyond their core expertise. Developers learn about infrastructure and operations, while operations engineers gain insights into software development practices. “This cross-pollination of skills builds an understanding and sense of empathy between teams,” he says. Brial says every layer of an IT department should be moving toward “everything” as code, noting that provisioning and deploying any type of software is costly, time-consuming and complex.
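The "everything as code" idea boils down to declaring a desired state and letting tooling reconcile reality against it. A toy sketch of that declare-then-reconcile loop, with hypothetical resource names (real tools such as Terraform or Ansible follow the same shape):

```python
# Desired state, declared as data rather than as manual steps.
desired = {
    "web-server": {"image": "nginx:1.25", "replicas": 3},
    "queue":      {"image": "rabbitmq:3", "replicas": 1},
}

def reconcile(current, desired):
    """Compute the changes needed to move the running environment
    toward the declared state -- the core loop of 'X as code'."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name))
        elif current[name] != spec:
            actions.append(("update", name))
    for name in current:
        if name not in desired:
            actions.append(("delete", name))
    return actions
```

Because the declaration lives in version control, provisioning becomes reviewable and repeatable instead of costly one-off work.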


Logic bombs explained: Definition, examples, prevention

A logic bomb is a set of instructions embedded in a software system that, if specified conditions are met, triggers a malicious payload to take actions against the operating system, application, or network. The actual code that does the dirty work, sometimes referred to as slag code, might be a standalone application or be hidden within a larger program. ... The actual behavior of a logic bomb can range widely. When it comes to the insider threats that make up much of the logic bomb landscape, a few types of attack are particularly common, including file or hard drive deletions, either as a ransom threat or act of revenge, or data exfiltration, as part of a plan to use privileged information in future employment. ... The best way to sniff out malicious code that’s being embedded in your own software, either deliberately by a disgruntled employee or inadvertently in the form of a third-party library, is to bake secure coding practices, like those that are part of the DevSecOps philosophy, into your development pipeline. These practices are meant to ensure that any code passes security tests before it’s put into production, and would prevent a lone wolf insider attacker from unilaterally changing code in an insecure way.
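The trigger-plus-payload structure described above can be shown with a deliberately harmless sketch. The function below looks like routine maintenance, but its payload (a benign stand-in here) only fires when two conditions line up, which is exactly why logic bombs evade casual testing:

```python
import datetime

def payload():
    # A real bomb's slag code might wipe files or exfiltrate data;
    # this harmless stand-in just reports that it fired.
    return "payload executed"

def nightly_maintenance(today, author_still_employed):
    """Hidden trigger: the payload fires only after a set date AND
    once the author has left -- conditions a normal test run of the
    'maintenance' code will never satisfy."""
    if today >= datetime.date(2025, 1, 1) and not author_still_employed:
        return payload()
    return "maintenance completed"
```

Code review and the DevSecOps practices mentioned above are effective precisely because a second pair of eyes can spot a condition like this that dynamic testing never exercises.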



Quote for the day:

"Each day you are leading by example. Whether you realize it or not or whether it's positive or negative, you are influencing those around you." -- Rob Liano

Daily Tech Digest - July 04, 2024

Understanding collective defense as a route to better cybersecurity

Organizations invoking collective defense to protect their IT and data assets will usually focus on sharing threat intelligence and coordinating threat response actions to counter malicious threat actors. Success depends on defining and implementing a collaborative cybersecurity strategy where organizations, both internally and externally, work together across industries to defend against targeted cyber threats. ... Putting this into practice requires organizations to commit to coordinating their cybersecurity strategies to identify, mitigate and recover from threats and breaches. This should begin with a process that defines the stakeholders who will participate in the collective defense initiative. These can include anything from private companies and government agencies to non-profits and Information Sharing and Analysis Centers (ISACs), among others. The approach will only work if it is based on mutual trust, so there is an important role for the use of mechanisms such as non-disclosure agreements, clearly defined roles and responsibilities and a commitment to operational transparency. 


Meaningful Ways to Reward Your IT Team and Its Achievements

With technology rapidly advancing, it's more important than ever to invest in personalized IT team skill development and employee well-being programs, which are a win-win for employees and the companies they work for, says Carrie Rasmussen, CIO at human resources software provider Dayforce, in an email interview. ... Synchronize rewards to project workflows, Felker recommends. If it's a particularly difficult time for the team -- tight deadlines, major changes, and other pressing issues -- he suggests scheduling rewards prior to the work's completion to boost motivation. "Having the team get a boost mid-stream on a project is likely to create an additional reservoir of mental energy they can draw from as the project continues," Felker says. ... It's also important to celebrate success whenever possible and to acknowledge that the outcome was the direct result of great teamwork. "Five minutes of recognition from the CEO in a company update or other forum motivates not only the IT team but the rest of the organization to strive for recognition," Nguyen says. He also advises promoting significant team achievements on LinkedIn and other major social platforms. "This will aid recruiting and retention efforts."


Deepfake research is growing and so is investment in companies that fight it

Manipulating human likeness, such as creating deepfake images, video and audio of people, has become the most common tactic for misusing generative AI, a new study from Google reveals. The most common reason to misuse the technology is to influence public opinion – including swaying political opinion – but it is also finding its way into scams, fraud or other means of generating profit. ... Impersonations of celebrities or public figures, for instance, are often used in investment scams, while AI-generated media can also be used to bypass identity verification and conduct blackmail, sextortion and phishing scams. As the primary data source is media reports, the researchers warn that the perception of AI-generated misuse may be skewed toward cases that attract headlines. But despite concerns that sophisticated or state-sponsored actors will use generative AI, many of the cases of misuse were found to rely on popular tools that require minimal technical skills. ... With the threat of deepfakes becoming widespread, some companies are coming up with novel solutions that protect images online.


Building Finance Apps: Best Practices and Unique Challenges

By making compliance a central focus from day one of the development process, you maximize your ability to meet compliance needs, while also avoiding the inefficient process of retrofitting compliance features into the app later. For example, implementing transaction reporting after the rest of the app has been built is likely to be a much heavier lift than designing the app from the start to support that feature. ... The tech stack (meaning the set of frameworks and tools you use to build and run your app) can have major implications for how easy it is to build the app, how secure and reliable it is, and how well it integrates with other systems or platforms. For that reason, you'll want to consider your stack carefully, and avoid the temptation to go with whichever frameworks or tools you know best or like the most. ... Given the plethora of finance apps available today, it can be tempting to want to build fancy interfaces or extravagant features in a bid to set your app apart. In general, however, it's better to adopt a minimalist approach. Build the features your users actually want — no more, no less. Otherwise, you waste time and development resources, while also potentially exposing your app to more security risks.


OVHcloud blames record-breaking DDoS attack on MikroTik botnet

Earlier this year, OVHcloud had to mitigate a massive packet rate attack that reached 840 Mpps, surpassing the previous record holder, an 809 Mpps DDoS attack targeting a European bank, which Akamai mitigated in June 2020. ... OVHcloud says many of the high packet rate attacks it recorded, including the record-breaking attack from April, originate from compromised MikroTik Cloud Core Router (CCR) devices designed for high-performance networking. The firm specifically identified compromised models CCR1036-8G-2S+ and CCR1072-1G-8S+, which are used as small- to medium-sized network cores. Many of these devices exposed their interface online and ran outdated firmware, making them susceptible to attacks leveraging exploits for known vulnerabilities. The cloud firm hypothesizes that attackers might use MikroTik RouterOS’s “Bandwidth Test” feature, designed for network throughput stress testing, to generate high packet rates. OVHcloud found nearly 100,000 MikroTik devices that are reachable/exploitable over the internet, making for a large pool of potential targets for DDoS actors.
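A back-of-envelope calculation shows why packet *rate* is reported separately from bandwidth. Assuming minimum-size 64-byte Ethernet frames plus roughly 20 bytes of preamble and inter-frame gap per frame (an illustrative assumption), the record attack needs only a few hundred gigabits per second of bandwidth; it is the per-packet processing load that overwhelms routers and firewalls:

```python
def packet_rate_to_gbps(pps, bytes_on_wire=84):
    """Line rate implied by a packet rate, assuming 64-byte frames
    plus ~20 bytes of per-frame overhead on the wire."""
    return pps * bytes_on_wire * 8 / 1e9

# The record 840 Mpps attack: ~564 Gbps -- modest by modern
# volumetric-DDoS standards, yet 840 million routing/filtering
# decisions per second.
record_gbps = packet_rate_to_gbps(840_000_000)
```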


Set Goals and Measure Progress for Effective AI Deployment

Combining human expertise and AI capabilities to augment decision-making is an essential tenet in responsible AI principles. The current age of AI adoption should be considered a “coming together of humans and technology.” Humans will continue to be the custodians and stewards of data, which ties into Key Factor 2 about the need for high-quality data, as humans can help curate the relevant data sets to train an LLM. This is critical, and the “human-in-the-loop” facet should be embedded in all AI implementations to avoid completely autonomous implementations. Apart from data curation, this allows humans to take more meaningful actions when equipped with relevant insights, thus achieving better business outcomes. ... Addressing bias, privacy, and transparency in AI development and deployment is the pivotal metric in measuring its success. As with any technology, laying out guardrails and rules of engagement is core to this factor. Enterprises such as Accenture implement measures to detect and prevent bias in their AI recruitment tools, helping to ensure fair hiring practices.
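The "human-in-the-loop" facet often reduces to a simple gate: apply model output automatically only when the model is confident, and route everything else to a person. A minimal sketch (the threshold value is illustrative, not prescribed by the source):

```python
def decide(prediction, confidence, threshold=0.85):
    """Human-in-the-loop gate: auto-apply confident model output,
    route low-confidence cases to human review."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)
```

Tuning the threshold is itself a governance decision: lowering it trades human workload for more autonomous (and riskier) decisions.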


Site Reliability Engineering State of the Union for 2024

Automation remains at the core of SRE, with tools for container orchestration and infrastructure management playing a critical role. The adoption of containerization technologies such as Docker and Kubernetes has facilitated more efficient deployment and scaling of applications. In 2024, we can expect further advancements in automation tools that streamline the orchestration of complex microservices architectures, thereby reducing the operational burden on SRE teams. Infrastructure automation and orchestration are pivotal in the realm of SRE, enabling teams to manage complex systems with enhanced efficiency and reliability. The evolution of these technologies, particularly with the advent of containerization and microservices, has significantly transformed how applications are deployed, managed and scaled. ... With the increasing prevalence of cyberthreats and the tightening of regulatory requirements, security and compliance have become integral aspects of SRE. Automated tools for compliance monitoring and enforcement will become indispensable, enabling organizations to adhere to industry standards while minimizing the risk of data breaches and other security incidents.


5 Steps to Refocus Your Digital Transformation Strategy for Strategic Advancement

A strategy built around customer value provides measurable outcomes and drives deeper engagement and loyalty. The digital landscape is riddled with risks and opportunities due to rapid technological advancements, especially in data-centric AI. Businesses must stay agile, continually evaluating the risks and rewards of new technologies while maintaining a sharp focus on how these enhancements serve their customer base. ... Organizations with a customer advisory board should leverage it to gain insights directly from those who use their services or products. Engaging customers from the early stages of planning ensures that their feedback and needs directly influence the transformation strategy, leading to more accurate and beneficial implementations. ... One significant mistake IT leaders make is prioritizing technology over customer needs. While technology is a crucial enabler, it should not dictate the strategy. Instead, it should support and enhance the strategy’s core aim — serving the customer. IT leaders must ensure that digital initiatives align with broader business objectives and directly contribute to customer satisfaction and business efficiency.


OpenSSH Vulnerability “regreSSHion” Grants RCE Access Without User Interaction, Most Dangerous Bug in Two Decades

The good news about the OpenSSH vulnerability is that exploitation attempts have not yet been spotted in the wild. Successfully taking advantage of the exploit required about 10,000 tries to win a race condition using 100 concurrent connections under the researcher’s test conditions, or about six to eight hours to achieve RCE because ASLR obscures glibc’s address. The attack will thus likely be limited to those wielding botnets once it is uncovered by threat actors. Given the large number of simultaneous connections needed to induce the race condition, the RCE is also very open to being detected and blocked by firewalls and network monitoring tools. Qualys’ immediate advice for mitigation also includes updating network-based access controls and segmenting networks where possible. ... “While there is currently no proof of concept demonstrating this vulnerability, and it has only been shown to be exploitable under controlled lab conditions, it is plausible that a public exploit for this vulnerability could emerge in the near future. Hence it’s strongly advised to patch this vulnerability before this becomes the case.”
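As a first triage step before patching, banner version checks can flag hosts in the range Qualys reported for regreSSHion (OpenSSH 8.5p1 up to, but not including, the fixed 9.8p1; very old pre-4.4 builds are also affected and omitted here for brevity). Note that a version check is only a heuristic, since distributions backport fixes without bumping the version:

```python
import re

def possibly_vulnerable(banner):
    """Heuristic check of an SSH banner against the regreSSHion
    range (8.5 <= version < 9.8). Always confirm against vendor
    advisories -- distros backport patches."""
    m = re.search(r"OpenSSH_(\d+)\.(\d+)", banner)
    if not m:
        return False
    version = (int(m.group(1)), int(m.group(2)))
    return (8, 5) <= version < (9, 8)
```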


New paper: AI agents that matter

So are AI agents all hype? It’s too early to tell. We think there are research challenges to be solved before we can expect agents such as the ones above to work well enough to be widely adopted. The only way to find out is through more research, so we do think research on AI agents is worthwhile. One major research challenge is reliability — LLMs are already capable enough to do many tasks that people want an assistant to handle, but not reliable enough that they can be successful products. To appreciate why, think of a flight-booking agent that needs to make dozens of calls to LLMs. If each of those went wrong independently with a probability of, say, just 2%, the overall system would be so unreliable as to be completely useless (this partly explains some of the product failures we’ve seen). ... Right now, however, research is itself contributing to hype and overoptimism because evaluation practices are not rigorous enough, much like the early days of machine learning research before the common task method took hold. That brings us to our paper.
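The flight-booking arithmetic above is worth making explicit: independent per-call failures compound multiplicatively, so even a small error rate destroys end-to-end reliability.

```python
def end_to_end_success(p_fail_per_call, calls):
    """Probability an agent completes all calls when each fails
    independently with probability p_fail_per_call."""
    return (1 - p_fail_per_call) ** calls

# With the article's 2% per-call failure rate:
# 10 calls -> ~82%, 25 calls -> ~60%, 50 calls -> ~36% success.
```

At dozens of calls, the agent fails most of the time, which is why per-call reliability, not just capability, gates whether agents become viable products.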



Quote for the day:

"You can’t fall if you don’t climb. But there’s no joy in living your whole life on the ground." -- Unknown

Daily Tech Digest - July 03, 2024

“The artificial intelligence (AI) boom across all industries has fueled anxiety in the workforce, with employees fearing ethical usage, legal risks and job displacement,” EY said in its report. The future of work has shifted due to genAI in particular, enabling work to be done equally well and securely across remote, field, and office environments, according to EY. ... “We can say that the most common worry is that AI will impact an employee’s role – either making it obsolete entirely or changing it in a way which concerns the employee – for example, taking some of the challenge or excitement out of it,” Harris said. “And the point is, these perspectives are already having an impact – irrespective of what the future really holds.” Harris said in another Gartner survey, employees indicated they were less likely to stay with an organization due to concerns about AI-driven job loss. ... Organizations can also overcome employee AI fears and build trust by offering training or development on a range of topics, such as how AI works, how to create prompts and effectively use AI, and even how to evaluate AI output for biases or inaccuracies. And employees want to learn.


Importance of security-by-design for IT and OT systems in building a security resilient framework

Regular security testing, vulnerability assessments, penetration testing, and compliance audits are vital for identifying vulnerabilities and potential attack vectors. This proactive approach allows organizations to rectify weaknesses, enhance security posture, and protect their systems effectively. For OT systems, specialized testing methods that support the unique requirements of industrial environments are necessary. ... Developers should adopt secure coding practices, including coding standards, input validation, secure data storage, secure communication protocols, code reviews, and automated security testing. These measures help identify and mitigate security issues during the development phase, eliminating common vulnerabilities. Additionally, training developers in secure coding techniques and fostering a security-centric culture within development teams are equally crucial. ... Regular software updates and effective patch management are essential to address newly identified security vulnerabilities. Staying current with security patches and updates for all software components is crucial. 
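One of the secure coding practices listed above, input validation combined with safe data handling, can be shown in a few lines. A parameterized query keeps user input out of the SQL text entirely, closing off injection (using Python's standard `sqlite3` module for illustration):

```python
import sqlite3

def find_user(conn, username):
    # Parameterized query: the '?' placeholder means user input is
    # bound as data, never interpreted as SQL -- injection strings
    # are matched literally and simply find no row.
    cur = conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    )
    return cur.fetchone()
```

The same bind-don't-concatenate principle applies to OT protocols and any interface that passes untrusted input to an interpreter.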


The Case For a Managed Career for Architects

The case for making architecture a managed profession stems from a few critical factors: Rising Levels of Societal Impact: The impact of technology is growing daily. This impact, and the difficulty of technology – not just threats but people’s daily interactions with technology such as subscription models, social media, passwords and banking – is increasingly important to the average person. ... Regulatory Pressure: Increasing pressure is coming to bear on all aspects of technology as it relates to government and regulation. From things like sustainability, privacy and accountability to impacts to purchasing, monopolies, identity and security. The more prevalent technology becomes in society, the more regulation that needs to be met to ensure appropriate use. ... New Technology Opportunities/Threats: Avoiding catastrophes in both small and large scopes is one function of modern professions. Non-professionals are not allowed to play with dangerous research or deploy dangerous products in most fields. ... Severe Demand/Quality Problems: The demand for high-quality architecture professionals is growing daily. This demand can no longer be met with the role-based education methods that were developed in the early rush of the ’90s.


Is your bank’s architecture trapped in the past? It’s time to recompose it

The complexity of banking modernisation, particularly the cost and resource intensiveness associated with a big-bang approach, is one of several reasons many banks may still be stuck with a legacy technology platform. With contemporary architectural techniques such as the “strangulation pattern”, banks can achieve the desired modernisation in a streamlined manner. The strangulation pattern is a software migration strategy involving forming a new software layer, the “strangler”, around the legacy banking system. This strangler interacts with the core system’s data and functionality through well-defined APIs. Gradually, new functionalities are developed within the strangler layer in parallel with the legacy systems, allowing the bank to independently test and refine the new functionalities. Over time, more and more functionalities, based on needs and complexity, are migrated from the core system to the strangler layer. As a result, the core system becomes less and less critical and can be retired entirely or kept as a backup system. Not only does this approach minimise risk compared to a big-bang switchover, but it also allows business operations to continue with minimal disruption.
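The heart of the strangler pattern is a routing facade in front of the legacy core. A toy sketch with hypothetical service names: as each function migrates, it is added to the routing table, and the legacy core handles everything that remains.

```python
# Operations already migrated out of the legacy core (hypothetical).
MIGRATED = {"payments", "onboarding"}

def legacy_core(operation, request):
    return f"legacy handled {operation}"

def strangler_layer(operation, request):
    return f"strangler handled {operation}"

def route(operation, request=None):
    """The facade callers actually hit: migrated operations go to
    the new layer, everything else still reaches the legacy core.
    Migration is just growing MIGRATED one entry at a time."""
    handler = strangler_layer if operation in MIGRATED else legacy_core
    return handler(operation, request)
```

Because callers only ever see `route`, each function can be cut over (or rolled back) independently, which is what makes the approach lower-risk than a big-bang switchover.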


Cyberinsurance Premiums are Going Down: Here’s Why and What to Expect

The insurance cycle is described in Wikipedia as “a term describing the tendency of the insurance industry to swing between profitable and unprofitable periods over time…” Such swings are common to all businesses but are particularly relevant to insurance. Within this insurance cycle, the swing is between a ‘hard market’ and a ‘soft market’. Howden defines it thus: “In simple terms, [a soft market] is when there is a lot of insurance capacity, and rates are low. Conversely, a hard market is when insurance capacity is reduced and premium rates are high.” Noticeably, the state of the insured does not figure. “Insurance markets (cyber, property, D&O, etc) tend to run through rating cycles,” explains George Mawdsley, head of risk solutions at DeNexus. “What makes cyber unique is that there is material uncertainty around how big the ‘Big Storms’ can get, which means capital allocators will make conservative assumptions on max downside or will not invest. Given the strong growth projections (demand) for the cyber insurance market, we expect this dynamic to drive up prices over the long term.”


Productivity and patience: How GitHub Copilot is expanding development horizons

Copilot shines in "implementing straightforward, well-defined components in terms of performance and other non-functional aspects. Its efficiency diminishes when addressing complex bugs or tasks requiring deep domain expertise." ... Copilot's greatest challenge is context, he pointed out. "Code and code development has a lot to do with the context that you're dealing with. Are you in a legacy code base or not? Are you in COBOL or in C++ or in JavaScript or TypeScript? It's a lot of context that needs to happen for the quality of that code to be high and for you to accept it." ... The impact on software development from AI will be subtler: "What if a text box is all they needed to be able to accomplish something that creates software and something that they could then derive value from?" For example, said Rodriguez: "If I could say very quickly in my phone, 'Hey, I am thinking of talking to my daughter about these things. Can you give me the last three X, Y, and Z articles and then just create a little program that we could play as a game?' You could envision Copilot being able to help you with that in the future."


How Part-Time Senior Leaders Can Help Your Business

It’s not only CEOs that benefit. With their deep functional expertise, fractionals often serve as advisors and mentors to other C-suite leaders. Barry Hurd, a fractional chief marketing officer (CMO), describes his role as providing expert counsel to full-time CMOs: “I’ve worked with a couple of CMOs who have hired me to simply double-check their work. I act as the executive coach, bringing my 30 years of wisdom and experience.” Similarly, Katie Walter, another fractional CMO, shares an experience where she supported an executive transitioning into a marketing leadership role: “She had never led the marketing function before, so the expectation was that I would work alongside her and help her to become more effective. In this case, I was introduced to the team as her coach.” The benefits also extend to the organization as a whole. Because fractional leaders often juggle multiple roles, they gain access to a wide professional network and are exposed to diverse working methods. This unique position allows them to introduce new ideas and practices among the organizations they serve. 


How the CISO Can Transform Into a True Cyber Hero

Operationalizing readiness, response, and recovery is where the rubber meets the road for the CISO. Plans, processes, and technologies underpin operations, but they each rely on people. Tabletop exercises that focus only on technical response activities strengthen only one "muscle group" of the organization. Consider a different kind of cyber exercise — a war game that involves the entire organization. By exercising the incident management plan with a broader constituency of stakeholders, organizations can build "muscle memory," test communication channels, and identify decisions or risks based on a given scenario. As part of the war game, the recovery team should run through the sequential restoration. By socializing the order in which operations will return after a disruption, the team can reduce the number of "Is it back online yet?" queries received during a real incident.  ... There's an old joke that "CISO" stands for "career is seriously over." But today’s CISO has a serious role to play as a hero for their organization. It is a simple matter of evolving from a primarily technical role to a role that incorporates empowering their human peers and stakeholders to become greater collaborators in cyber-incident response, recovery, and readiness.


How CISOs can protect their personal liability

One of the most effective and methodical forms of documentation that a CISO can maintain is a risk register that identifies existing cyber risk and records risk acceptance by relevant business stakeholders. This can help bring greater visibility into cyber risk to the board, and it certainly helps CISOs to protect themselves. “In order to run a security program, you have to have a risk register. It’s like table stakes,” says Greg Notch, CISO of Expel, a managed detection and response firm, and a longtime security veteran who served as CISO for the National Hockey League prior to this job. ... Even with rock-solid policies, procedures, and documentation, CISOs should also seek to establish legal protection through tools like indemnification agreements, employment contractual terms, and the right level of insurance protection. Kolochenko says CISOs who are unsure of their protections should proactively reach out to their general counsel and ask them about all of their duties, liabilities, and protections. If something sounds unfavorable, push back, he says.


How New Frameworks for Cyber Metrics are Reshaping Boardroom Conversations

Ideally, boards have one or more sitting executives with risk experience, but the reality is that boards primarily consist of executives with a non-technical understanding of risk management methods. Risk and cybersecurity information must always be conveyed in easy-to-understand, business-oriented language. Start by quantifying risk in monetary or dollar terms. Board members may not understand the technical details of Monte Carlo simulations or probabilistic risk assessments, but they do need to understand the potential impact of risk on the business in the most efficient way. Quantification can help anyone understand how the business anticipates risk, prioritizes risk controls, and takes preventative action against risk. Tailor risk information to board members, depending on their expertise and the board report’s purpose. There is no one-size-fits-all approach to reporting. CISOs can segregate risk metrics into categories, like security, financial, third-party, or employee awareness risks. Grouping information together helps non-technical executives understand how risks are interconnected and what’s being done to anticipate these risks.
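To make the dollar-terms quantification concrete, here is a hedged Monte Carlo sketch; the incident-frequency and loss-size parameters are invented for illustration, not taken from any real assessment:

```python
import random

# Hedged sketch: estimating annualized cyber loss in dollar terms with a
# simple Monte Carlo simulation. Frequency (~0.3 incidents/month) and the
# lognormal loss-per-incident parameters are invented for illustration.
def simulate_annual_loss(trials=100_000, seed=42):
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        # Number of incidents this simulated year.
        incidents = sum(rng.random() < 0.3 for _ in range(12))
        # Dollar loss per incident drawn from a heavy-tailed distribution.
        loss = sum(rng.lognormvariate(11, 1) for _ in range(incidents))
        losses.append(loss)
    losses.sort()
    mean = sum(losses) / trials
    p95 = losses[int(0.95 * trials)]  # 95th-percentile "bad year"
    return mean, p95

mean, p95 = simulate_annual_loss()
print(f"Expected annual loss: ${mean:,.0f}; 95th percentile: ${p95:,.0f}")
```

A board report would surface only the two resulting numbers — the expected annual loss and a worst-case (here 95th-percentile) figure — not the simulation mechanics.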



Quote for the day:

“The more you lose yourself in something bigger than yourself, the more energy you will have.” -- Norman Vincent Peale

Daily Tech Digest - July 02, 2024

The Changing Role of the Chief Data Officer

The chief data officer originally played more “defense” than “offense.” The position focused on data security, fraud protection, and Data Governance, and tended to attract people from a technical or legal background. CDOs now may take on a more offensive strategy, proactively finding ways to extract value from the data for the benefit of the wider business, and may come from an analytics or business background. Of course, in reality, the choice between offense and defense is a false one, as companies must do both. ... Major trends for CDOs in the future will include incorporating cutting-edge technology, such as generative AI, large language models, machine learning, and increasingly sophisticated forms of automation. The role is also spreading to a wider variety of industry sectors, such as healthcare, the private sector, and higher education. One of the major challenges is already in progress: responding to the COVID-19 pandemic. The pandemic severely disrupted global supply chains, created new business markets, and also radically changed the nature of business itself.


Duplicate Tech: A Bottom-Line Issue Worth Resolving

The patchwork nature of combined technologies can hinder processes and cause data fragmentation or loss. Moreover, differing cybersecurity capabilities among technologies can expose the organization to increased risk of cyberattacks, as older or less secure systems may be more vulnerable to breaches. Retaining multiple technologies may initially seem prudent in a merger or acquisition, but ultimately it proves detrimental. The drawbacks — from duplicated data and disconnected processes to inefficiencies and security vulnerabilities — far outweigh any perceived benefits, highlighting the critical need for streamlined, unified IT systems. ... There are compelling reasons to remove the dead weight of duplicate technologies and adopt a singular technology. The first step in eliminating tech redundancy is to evaluate existing technologies to determine which tools best align with current and future business needs. A collaborative approach with all relevant stakeholders is recommended to ensure the chosen solution supports organizational goals and avoids unnecessary repetition.


Disability community has long wrestled with 'helpful' technologies—lessons for everyone in dealing with AI

This disability community perspective can be invaluable in approaching new technologies that can assist both disabled and nondisabled people. You can't substitute pretending to be disabled for the experience of actually being disabled, but accessibility can benefit everyone. This is sometimes called the curb-cut effect after the ways that putting a ramp in a curb to help a wheelchair user access the sidewalk also benefits people with strollers, rolling suitcases and bicycles. ... Disability advocates have long battled this type of well-meaning but intrusive assistance—for example, by putting spikes on wheelchair handles to keep people from pushing a person in a wheelchair without being asked to or advocating for services that keep the disabled person in control. The disabled community instead offers a model of assistance as a collaborative effort. Applying this to AI can help to ensure that new AI tools support human autonomy rather than taking over. A key goal of my lab's work is to develop AI-powered assistive robotics that treat the user as an equal partner. We have shown that this model is not just valuable, but inevitable. 


What is the Role of Explainable AI (XAI) In Security?

XAI in cybersecurity is like a colleague who never stops working. While AI helps automatically detect and respond to rapidly evolving threats, XAI helps security professionals understand how these decisions are being made. “Explainable AI sheds light on the inner workings of AI models, making them transparent and trustworthy. Revealing the why behind the models’ predictions, XAI empowers the analysts to make informed decisions. It also enables fast adaptation by exposing insights that lead to quick fine-tuning or new strategies in the face of advanced threats. And most importantly, XAI facilitates collaboration between humans and AI, creating a context in which human intuition complements computational power,” Kolcsár added. ... With XAI working behind the scenes, security teams can quickly discover the root cause of a security alert and initiate a more targeted response, minimizing the overall damage caused by an attack and limiting resource wastage. As transparency allows security professionals to understand how AI models adapt to rapidly evolving threats, they can also ensure that security measures are consistently effective.
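As a toy illustration of the “why behind the prediction”: for a linear alert-scoring model, each feature’s contribution is simply weight × value, which an analyst can rank to see what drove the alert. The feature names and weights below are hypothetical:

```python
# Toy "explainable" alert scorer: a linear model whose score decomposes
# exactly into per-feature contributions (weight * value), so an analyst
# can see *why* an alert fired. Features and weights are hypothetical.
weights = {"failed_logins": 0.6, "new_geo": 1.2, "off_hours": 0.4}

def score_with_explanation(event):
    contributions = {f: weights[f] * event.get(f, 0.0) for f in weights}
    total = sum(contributions.values())
    # Rank features by how much each one contributed to the score.
    ranked = sorted(contributions.items(), key=lambda kv: -kv[1])
    return total, ranked

total, ranked = score_with_explanation({"failed_logins": 8, "new_geo": 1, "off_hours": 0})
print(f"score={total:.1f}")
for feature, contrib in ranked:
    print(f"  {feature}: {contrib:+.1f}")
```

Real XAI tooling (e.g., attribution methods for non-linear models) is more involved, but the output an analyst consumes is the same shape: a ranked list of reasons behind a score.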


10 ways AI can make IT more productive

By infusing AI into business processes, enterprises can achieve levels of productivity, efficiency, consistency, and scale that were unimaginable a decade ago, says Jim Liddle, CIO at hybrid cloud storage provider Nasuni. He observes that mundane repetitive tasks, such as data entry and collection, can be easily handled 24/7 by intelligent AI algorithms. “Complex business decisions, such as fraud detection and price optimization, can now be made in real-time based on huge amounts of data,” Liddle states. “Workflows that spanned days or weeks can now be completed in hours or minutes.”  “Enterprises have long sought to drive efficiency and scale through automation, first with simple programmatic rules-based systems and later with more advanced algorithmic software,” Liddle says.  ... “By reducing boilerplating, teams can save time on repetitive tasks while automated and enhanced documentation keeps pace with code changes and project developments.” He notes that AI can also automatically create pull requests and integrate with project management software. Additionally, AI can generate suggestions to resolve bugs, propose new features, and improve code reviews.


How Tomorrow's Smart Cities Will Think For Themselves

When creating a cognitive city, the fundamental need is to move the computing power to where data is generated: where people live, work and travel. That applies whether you’re building a totally new smart city or retrofitting technology to a pre-existing ‘brownfield’ city. Either way, edge is key here. You’re dealing with information from sensors in rubbish bins, drains, and cameras in traffic lights. ... But in years to come the city itself will respond dynamically to the changing physical world, adjusting energy use in real-time to respond to the weather, for example. The evolution of monitoring has come from a machine-to-machine foundation, with the introduction of the Internet of Things (IoT) and now artificial intelligence (AI) becoming transformational in enabling smart technologies to become dynamic. Emerging AI technologies such as large language models will also play a role going forward, making it easy for both city planners and ordinary citizens to interact with the city they live in. Edge will be the key ingredient which gives us effective control of these cities of the future.


Serverless cloud technology fades away

The meaning of serverless computing became diluted over time. Originally coined to describe a model where developers could run code without provisioning or managing servers, it has since been applied to a wide range of services that do not fit its original definition. This led to a confusing loss of precision. It’s crucial to focus on the functional characteristics of serverless computing. The elements of serverless—agility, cost-efficiency, and the ability to rapidly deploy and scale applications—remain valuable. It’s important to concentrate on how these characteristics contribute to achieving business goals rather than becoming fixated on the specific technologies in use. Serverless technology will continue to fade into the background due to the rise of other cloud computing paradigms, such as edge computing and microclouds. ... The explosion of generative AI also contributed to the shifting landscape. Cloud providers are deeply invested in enabling AI-driven solutions, which often require specialized computing resources and significant data management capabilities, areas where traditional serverless models may not always excel.


Infrastructure-as-code and its game-changing impact on rapid solutions development

Automation is one of the main benefits of adopting an IaC approach. By automating infrastructure provisioning, IaC allows configuration to be accomplished at a faster pace. Automation also reduces the risk of errors that can result from manual coding, empowering greater consistency by standardizing the development and deployment of the infrastructure. ... Developers can rapidly assemble and deploy these infrastructure blocks, reusing them as needed throughout the development process. When adjustments are needed, developers can simply update the code the blocks are built on rather than making manual one-off changes to infrastructure components. Testing and tracking are more streamlined with IaC since the IaC code serves as a centralized and readily accessible source for documentation on the infrastructure. It also streamlines the testing process, allowing for automated unit testing of compliance, validation, and other processes before deploying. Additionally, IaC empowers developers to take advantage of the benefits provided by cloud computing. It facilitates direct interaction with the cloud’s exposed API, allowing developers to dynamically provision, manage, and orchestrate resources.
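The “reusable block” idea can be sketched in a few lines of Python: infrastructure is described as data generated by code, so a change to the block definition propagates to every environment that uses it. The resource fields below are invented for illustration and are not tied to any particular IaC tool:

```python
# Illustrative sketch of the reusable-block idea behind IaC: a function
# generates a resource definition, and every environment built from it
# picks up changes when the function is updated. Fields are hypothetical.
def web_server_block(name, size="small", region="us-east-1"):
    return {
        "type": "vm",
        "name": name,
        "size": size,
        "region": region,
        "tags": {"managed-by": "iac"},
    }

# Reuse the same block for several environments; editing web_server_block
# updates all of them consistently on the next deploy, instead of three
# manual one-off changes.
stack = [web_server_block(f"web-{env}") for env in ("dev", "staging", "prod")]
print(len(stack), stack[0]["name"])  # prints: 3 web-dev
```

Real IaC tools add the crucial second half — diffing this declared state against the cloud’s actual state and applying only the changes — but the reuse-through-code principle is the same.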


What is Multimodal AI? Here’s Everything You Need to Know

Multimodal AI describes artificial intelligence systems that can simultaneously process and interpret data from various sources such as text, images, audio, and video. Unlike traditional AI models that depend on a single type of data, multimodal AI provides a holistic approach to data processing. ... Although multimodal AI and generative AI share similarities, they differ fundamentally. For instance, generative AI focuses on creating new content from a single type of prompt, such as creating images from textual descriptions. In contrast, multimodal AI processes and understands different sensory inputs, allowing users to input various data types and receive multimodal outputs. ... Multimodal AI represents a significant advancement in the field of artificial intelligence. Therefore, by understanding and leveraging this advanced technology, data scientists and AI professionals can pave the way for more sophisticated, context-aware, and human-like AI systems, ultimately enriching our interaction with technology and the world around us. 


Excel Enthusiast to Supply Chain Innovator – The Journey to Building One of the Largest Analytic Platforms

While ChatGPT has helped raise awareness about AI capabilities, explaining how to integrate AI has presented challenges, especially when managing over 200 different data analytic reports. To address the different uses, Miranda has simplified AI into three categories: rule-based AI, learning AI (machine learning), and generative AI. Generative AI has emerged as the most dynamic tool among the three for executing and recording data analytics. Its versatility and adaptability make it particularly effective in capturing and processing diverse data sets, contributing to more comprehensive analytics outcomes. Miranda says, “People in analytics might not jump out of bed excited to tackle documentation, but it's a critical aspect of our work. Without proper documentation, we risk becoming a single point of failure, which is something we want to avoid.” ... These recordings are then converted into transcripts and securely stored in a containerized environment, streamlining the documentation process while ensuring data security. Because of process automation, Miranda says that the organization generated 240,000 work hours last year, and they anticipate even more this year.



Quote for the day:

"Life is like riding a bicycle. To keep your balance you must keep moving." -- Albert Einstein

Daily Tech Digest - July 01, 2024

The dangers of voice fraud: We can’t detect what we can’t see

The inherent imperfections in audio offer a veil of anonymity to voice manipulations. A slightly robotic tone or a static-laden voice message can easily be dismissed as a technical glitch rather than an attempt at fraud. This makes voice fraud not only effective but also remarkably insidious. Imagine receiving a phone call from a loved one’s number telling you they are in trouble and asking for help. The voice might sound a bit off, but you attribute this to the wind or a bad line. The emotional urgency of the call might compel you to act before you think to verify its authenticity. Herein lies the danger: Voice fraud preys on our readiness to ignore minor audio discrepancies, which are commonplace in everyday phone use. Video, on the other hand, provides visual cues. There are clear giveaways in small details like hairlines or facial expressions that even the most sophisticated fraudsters have not been able to get past the human eye. On a voice call, those warnings are not available. That’s one reason most mobile operators, including T-Mobile, Verizon and others, make free services available to block — or at least identify and warn of — suspected scam calls.


Provider or partner? IT leaders rethink vendor relationships for value

Vendors achieve partner status in McDaniel’s eyes by consistently demonstrating accountability and integrity; getting ahead of potential issues to ensure there’s no interruptions or problems with the provided products or services; and understanding his operations and objectives. ... McDaniel, other CIOs, and CIO consultants agree that IT leaders don’t need to cultivate partnerships with every vendor; many, if not most, can remain as straight-out suppliers, where the relationship is strictly transactional, fixed-fee, or fee-for-service based. That’s not to suggest those relationships can’t be chummy, but a good personal rapport between the IT team and the supplier’s team is not what partnership is about. A provider-turned-partner is one that gets to know the CIO’s vision and brings to the table ways to get there together, Bouryng says. ... As such, a true partner is also willing to say no to proposed work that could take the pair down an unproductive path. It’s a sign, Bouryng says, that the vendor is more interested in reaching a successful outcome than merely scheduling work to do.


In the AI era, data is gold. And these companies are striking it rich

AI vendors have, sometimes controversially, made deals with organizations like news publishers, social media companies, and photo banks to license data for building general-purpose AI models. But businesses can also benefit from using their own data to train and enhance AI to assist employees and customers. Examples of source material can include sales email threads, historical financial reports, geographic data, product images, legal documents, company web forum posts, and recordings of customer service calls. “The amount of knowledge—actionable information and content—that those sources contain, and the applications you can build on top of them, is really just mindboggling,” says Edo Liberty, founder and CEO of Pinecone, which builds vector database software. Vector databases store documents or other files as numeric representations that can be readily mathematically compared to one another. That’s used to quickly surface relevant material in searches, group together similar files, and feed recommendations of content or products based on past interests. 
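Underneath, a vector database boils down to comparing embeddings numerically. A toy sketch with 3-dimensional “embeddings” (real systems use hundreds of dimensions and approximate nearest-neighbor indexes, but the comparison is the same idea):

```python
import math

# Minimal sketch of what a vector database does under the hood: store
# numeric representations (embeddings) of documents and surface the most
# similar ones by cosine similarity. These 3-D embeddings are toy values.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

docs = {
    "sales email thread": [0.9, 0.1, 0.0],
    "legal contract":     [0.1, 0.8, 0.2],
    "support call notes": [0.7, 0.2, 0.3],
}

query = [0.8, 0.15, 0.1]  # embedding of the user's search text
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # prints: sales email thread
```

Swapping the toy vectors for embeddings produced by a language model is what turns this into semantic search over sales threads, contracts, and call recordings.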


Machine Vision: The Key To Unleashing Automation's Full Potential

Machine vision is a class of technologies that process information from visual inputs such as images, documents, computer screens, videos and more. Its value in automation lies in its ability to capture and process large quantities of documents, images and video quickly and efficiently in quantities and speeds far in excess of human capability. ... Machine vision-based technologies are even becoming central to the creation of automations themselves. For example, instead of relying on human workers to describe the processes being automated, recordings of those processes are created, and machine vision software, combined with other technologies, captures each process end-to-end and provides the input for automating much of the work needed to program the digital workers (bots). ... Machine vision is integral to maximizing the impact of advanced automation technologies on business operations and paving the way for increased capabilities in the automation space.


Put away your credit cards — soon you might be paying with your face

Biometric purchases using facial recognition are beginning to gain some traction. The restaurant CaliExpress by Flippy, a fully automated fast-food restaurant, is an early adopter. Whole Foods stores offer pay-by-palm, an alternative biometric to facial recognition. Given that they are already using biometrics, facial recognition is likely to be available in their stores at some point in the future. ... Just as credit and debit cards have overtaken cash as the dominant means to make purchases, biometrics like facial recognition could eventually become the dominant way to make purchases. There will however be actual costs during such a transition, which will largely be absorbed by consumers in higher prices. The technology software and hardware required to implement such systems will be costly, pushing it out of reach for many small- and medium-sized businesses. However, as facial recognition systems become more efficient and reliable, and losses from theft are reduced, an equilibrium will be achieved that will make such additional costs more modest and manageable to absorb.


Technologists must be ready to seize new opportunities

For technologists, this new dynamic represents a profound (and daunting) change. They’re being asked to report on application performance in a more business-focussed, strategic way and to engage in conversations around experience at a business level. They’re operating outside their comfort zone, far beyond the technical reporting and discussions they’ve previously encountered. Of course, technologists are used to rising to a challenge and pivoting to meet the changing needs of their organisations and their senior leaders. We saw this during the pandemic, many will (rightly) be excited about the opportunity to expand their skills and knowledge, and to elevate their standing within their organisations. The challenge that many technologists face, however, is that they currently don’t have the tools and insights they need to operate in a strategic manner. Many don’t have full visibility across their hybrid environments and they’re struggling to manage and optimise application availability, performance and security in an effective and sustainable manner. They can’t easily detect issues, and even when they do, it is incredibly difficult to quickly understand root causes and dependencies in order to fix issues before they impact end user experience. 


Vulnerability management empowered by AI

Using AI will take vulnerability management to the next level. AI not only reduces analysis time but also effectively identifies threats. ... AI-driven systems can identify patterns and anomalies that signify potential vulnerabilities or attacks. Converting the logs into data and charts will make analysis simpler and quicker. Incidents should be identified based on the security risk, and notification should take place for immediate action. Self-learning is another area where AI can be trained with data. This will enable AI to be up-to-date on the changing environment and capable of addressing new and emerging threats. AI will identify high-risk threats and previously unseen threats. Implementing AI requires iterations to train the model, which may be time-consuming. But over time, it becomes easier to identify threats and flaws. AI-driven platforms constantly gather insights from data, adjusting to shifting landscapes and emerging risks. As they progress, they enhance their precision and efficacy in pinpointing weaknesses and offering practical guidance.
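The pattern-and-anomaly spotting described above can be as simple as flagging log counts that sit far from the historical mean. A hedged sketch with invented numbers (real systems use richer features and learned baselines):

```python
import statistics

# Hedged sketch of anomaly spotting on log data: flag any count whose
# z-score (distance from the mean, in standard deviations) exceeds a
# threshold. The counts and threshold are invented for illustration.
def flag_anomalies(counts, threshold=2.5):
    mean = statistics.fmean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # perfectly flat data has no anomalies to flag
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

# Hourly failed-login counts with one obvious spike at index 7.
hourly_failed_logins = [4, 5, 3, 6, 4, 5, 4, 90, 5, 4]
print(flag_anomalies(hourly_failed_logins))  # prints: [7]
```

An ML-based system generalizes this by learning what “normal” looks like across many signals at once, but the alerting contract is the same: surface the deviations, ranked by security risk, for immediate action.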


Why every company needs a DDoS response plan

Given the rising number of DDoS attacks each year and the reality that DDoS attacks are frequently used in more sophisticated hacking attempts to apply maximum pressure on victims, a DDoS response plan should be included in every company’s cybersecurity tool kit. After all, it’s not just a temporary lack of access to a website or application that is at risk. A business’s failure to withstand a DDoS attack and rapidly recover can result in loss of revenue, compliance failures, and impacts on brand reputation and public perception. Successful handling of a DDoS attack depends entirely on a company’s preparedness and execution of existing plans. Like any business continuity strategy, a DDoS response plan should be a living document that is tested and refined over the years. It should, at the highest level, consist of five stages, including preparation, detection, classification, reaction, and postmortem reflection. Each phase informs the next, and the cycle improves with each iteration.


Reduce security risk with 3 edge-securing steps

Over the past several years web-based SSL VPNs have been targeted and used to gain remote access. You may even want to consider evaluating how your firm allows remote access and how often your VPN solution has been attacked or at risk. ... “The severity of the vulnerabilities and the repeated exploitation of this type of vulnerability by actors means that NCSC recommends replacing solutions for secure remote access that use SSL/TLS with more secure alternatives,” the authority says. “The NCSC recommends internet protocol security (IPsec) with internet key exchange (IKEv2). Other countries’ authorities have recommended the same.” ... Pay extra attention to how credentials that need to be accessed are protected from unauthorized access. Ensure that you use best practice processes to secure passwords and ensure that each user has appropriate passwords and access accordingly. ... When using cloud services, you need to ensure that only those vendors you trust or that you have thoroughly vetted have access to your cloud services. 

The real key to machine learning success is something that is mostly missing from genAI: the constant tuning of the model. “In ML and AI engineering,” Shankar writes, “teams often expect too high of accuracy or alignment with their expectations from an AI application right after it’s launched, and often don’t build out the infrastructure to continually inspect data, incorporate new tests, and improve the end-to-end system.” It’s all the work that happens before and after the prompt, in other words, that delivers success. For genAI applications, partly because of how fast it is to get started, much of this discipline is lost. ... As with software development, where the hardest work isn’t coding but rather figuring out which code to write, the hardest thing in AI is figuring out how or if to apply AI. When simple rules need to yield to more complicated rules, Valdarrama suggests switching to a simple model. Note the continued stress on “simple.” As he says, “simplicity always wins” and should dictate decisions until more complicated models are absolutely necessary.



Quote for the day:

“The vision must be followed by the venture. It is not enough to stare up the steps - we must step up the stairs.” -- Vance Havner

Daily Tech Digest - June 30, 2024

The Unseen Ethical Considerations in AI Practices: A Guide for the CEO

AI’s “black box” problem is well-known, but the ethical imperative for transparency goes beyond just making algorithms understandable and its results explainable. It’s about ensuring that stakeholders can comprehend AI decisions, processes, and implications, guaranteeing they align with human values and expectations. Recent techniques, such as reinforcement learning from human feedback (RLHF) that aligns AI outcomes to human values and preferences, help ensure that AI-based systems behave ethically. This means developing AI systems in which decisions are in accordance with human ethical considerations and can be explained in terms that are comprehensible to all stakeholders, not just the technically proficient. Explainability empowers individuals to challenge or correct erroneous outcomes and promotes fairness and justice. Together, transparency and explainability uphold ethical standards, enabling responsible AI deployment that respects privacy and prioritizes societal well-being. This approach promotes trust, and trust is the bedrock upon which sustainable AI ecosystems are built.


Cyber resilience - how to achieve it when most businesses – and CISOs – don’t care

Organizations should ask themselves some serious, searching questions about why they are driven to keep doing the same thing over and over again – while spending millions of dollars in the process. As Bathurst put it: Why isn't security by design built in at the beginning of these projects, which are driving people to make the wrong decisions – decisions that nobody wants? Nobody wants to leave us open to attack. And nobody wants our national health infrastructure, ... But at this point, we should remind ourselves that, despite that valuable exercise, both the Ministry of Defence and the NHS have been hacked and/or subjected to ransomware attacks this year. In the first case, via a payroll system, which exposed personal data on thousands of staff, and in the second, via a private pathology lab. The latter incursion revealed patient blood-test data, leading to several NHS hospitals postponing operations and reverting to paper records. So, the lesson here is that, while security by design is essential for critical national infrastructure, resilience in the networked, cloud-enabled age must acknowledge that countless other systems, both upstream and downstream, feed into those critical ones.


Prominent Professor Discusses Digital Transformation, the Future of AI, Tesla, and More

“Customers are always going to have some challenges, and there are constant new technological trends evolving. Digital transformation is about intentionally moving towards making the experience more personalized by weaving new technology applications to solve customer challenges and deliver value,” shared Krishnan. However, as machine learning and GenAI help companies personalize their products and services, the tools themselves are also becoming more niche. “I think we’ll move to more domain and industry-specific generative AI and large language models. The healthcare industry will have an LLM, consumer packaged goods, education, etc,” shared Krishnan. “However, because companies will protect their own data, every large organization will create its own LLM with the private data. That’s why generative AI is interesting because it can actually get to be more personalized while also leveraging the broader knowledge. Eventually, we may all have our own individual GPTs.” ... Although new technologies such as GenAI and machine learning have had an immense impact in such a short time, Krishnan warns that guardrails are necessary, especially as our use of these tools becomes more essential.


Enhancing Your Company’s DevEx With CI/CD Strategies

Cognitive load is the amount of mental processing a developer needs to complete a task. Companies generally standardize on one programming language for everything; their entire toolchain and talent pool is geared toward it for maximum productivity. CI/CD tools, on the other hand, often have their own DSL, so when developers want to alter CI/CD configurations, they must work in a new, rarely used language. This becomes a time sink and imposes a high cognitive load. One way to avoid giving developers needlessly high-cognitive-load tasks is to pick CI/CD tools that use a well-known language. For example, the data serialization language YAML (not always the most loved) is an industry standard that most developers already know how to use. ... In software engineering, feedback loops can be measured by how quickly questions are answered. Troubleshooting issues within a CI/CD pipeline can be challenging for developers because of a lack of visibility and information: these processes often operate as black boxes, running on servers that developers may not have direct access to, with software that is foreign to them.
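One way to shorten that feedback loop is to validate pipeline definitions locally before pushing them to the CI server. The sketch below is purely illustrative: the pipeline schema (stages, jobs, scripts) is invented for this example and does not match any specific CI product.

```python
# Hypothetical sketch: validating a CI pipeline definition locally, in plain
# Python, so developers get fast feedback without waiting on a remote server.
# The config schema here is invented for illustration.

PIPELINE = {
    "stages": ["build", "test", "deploy"],
    "jobs": {
        "compile": {"stage": "build", "script": ["make"]},
        "unit-tests": {"stage": "test", "script": ["make test"]},
        "release": {"stage": "deploy", "script": ["make release"]},
    },
}

def validate(pipeline):
    """Return a list of human-readable errors; an empty list means the config is sane."""
    errors = []
    stages = set(pipeline.get("stages", []))
    for name, job in pipeline.get("jobs", {}).items():
        if job.get("stage") not in stages:
            errors.append(f"job '{name}' references unknown stage '{job.get('stage')}'")
        if not job.get("script"):
            errors.append(f"job '{name}' has no script to run")
    return errors

print(validate(PIPELINE))  # -> []
```

Running a check like this in a pre-commit hook answers the developer's question ("is my config broken?") in seconds rather than after a full pipeline run.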


Digital Accessibility: Ensuring Inclusivity in an Online World

"It starts by understanding how people with disabilities use your online platform," he said. While the accessibility issues faced by people who are blind receive considerable attention, it's crucial to address the full spectrum of disabilities that affect technology use, including auditory, cognitive, neurological, physical, speech, and visual disabilities, Henry added. ... The key is to review accessibility during content creation with a diverse group of people and address their feedback in iterations, early and often. Bhowmick added that accessibility testing should always be run according to a structured testing script and mature testing methodologies to ensure reliable, reproducible, and sustainable test results. It is important to run accessibility testing during every stage of the software lifecycle: during design, before handing over the design to development, during development, and after development. Professional, thorough testing should take place before releasing the product to customers, Bhowmick said, and the test results should be made available in an accessibility conformance report (ACR) following the Voluntary Product Accessibility Template (VPAT) format.
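One small, automatable piece of such lifecycle testing is checking markup for common accessibility violations. The sketch below shows a single check, flagging `img` tags without alt text, using only Python's standard library; real audits combine full tooling and manual testing with assistive technologies, and this example is not a substitute for either.

```python
# Illustrative sketch of one automated accessibility check: flagging <img>
# tags that lack alt text. This covers only a tiny slice of accessibility
# testing; structured scripts and manual review are still required.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # Absent or empty alt attributes are flagged for review; truly
            # decorative images should carry alt="" with a documented reason.
            if not attr_map.get("alt"):
                self.violations.append(attr_map.get("src", "<unknown>"))

page = '<img src="logo.png" alt="Company logo"><img src="banner.png">'
checker = MissingAltChecker()
checker.feed(page)
print(checker.violations)  # -> ['banner.png']
```

Checks like this can run at every lifecycle stage the article lists, and their findings can feed the structured test results that an ACR/VPAT report summarizes.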


How Cloud-Native Development Benefits SaaS

Cloud-native practices, patterns, and technologies enhance the benefits of SaaS and COTS while reducing the inherent negatives by:

- Providing an extensible framework for adding new capabilities to commercial applications without having to customize the core product.
- Leveraging API and event-driven architecture to bypass the need for custom data integrations.
- Still offloading the complexity of most infrastructure and security concerns to a provider while gaining additional flexibility in scale and resilience implementation.
- Enabling opportunities to innovate core business systems with emerging technologies such as generative AI.

Enterprises relying on SaaS or COTS still need the flexibility to meet their ever-evolving business requirements. As we have seen with advances in AI over the past year, change and opportunity can arrive quickly and without warning. Chances are that your organization is already on a journey to cloud-native maturity, so take advantage of this effort by implementing technologies and patterns, like leveraging event-driven architectures and serverless functions, to extend your commercial applications rather than customizing or replacing them.
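The extend-don't-customize pattern can be sketched concretely: instead of modifying a SaaS product's core, a serverless function subscribes to the events the product emits and runs custom logic outside it. The event names, payload shape, and handler registry below are all invented for illustration, not any vendor's actual webhook API.

```python
# Minimal sketch of the event-driven extension pattern: custom business
# logic reacts to SaaS-emitted events rather than living inside the product.
# Event types and payloads are hypothetical.
from typing import Callable

HANDLERS: dict[str, list[Callable[[dict], None]]] = {}

def on(event_type: str):
    """Register a handler for a given SaaS-emitted event type."""
    def register(fn):
        HANDLERS.setdefault(event_type, []).append(fn)
        return fn
    return register

def dispatch(event: dict):
    """Entry point a serverless function would expose to the SaaS webhook."""
    for handler in HANDLERS.get(event.get("type", ""), []):
        handler(event)

@on("invoice.created")
def enrich_invoice(event):
    # Custom enrichment lives here, outside the SaaS core product.
    print(f"enriching invoice {event['id']}")

dispatch({"type": "invoice.created", "id": "INV-42"})
```

Because handlers are decoupled from the commercial application, new capabilities (including generative-AI-based ones) can be added or removed without touching, customizing, or replacing the core product.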


Cybersecurity as a Service Market: A Domain of Innumerable Opportunities

Traditional cybersecurity differs from cybersecurity as a service (CSaaS): depending on an organization's budget, size, and regulatory compliance requirements, different approaches are needed, and organizations are finding it tedious to rely entirely on themselves. The conventional method is to build an internal security team by hiring experienced security staff dedicated to cybersecurity duties, while CSaaS is an option in which the company outsources the security function. One survey found that 72.1% of businesses consider CSaaS solutions critical to their customer strategy. Let us now look at the growth of the cybersecurity-as-a-service market. ... Some of the challenges to market growth are a lack of training and an inadequate workforce, limited security budgets among SMEs, and a lack of interoperability. North America currently accounts for the largest share of worldwide market revenue; the region's high level of digitalization and the surge in the number of connected devices are projected to remain growth-propelling factors.


Top 5 (EA) Services Every Team Lead Should Know

The topic of sustainability is on everyone's priority list these days. It has become an integral part of sociopolitical and global agendas. Not to mention, more and more customers are asking for sustainable products and services, or will only buy from companies that act and operate sustainably themselves. Sustainability must therefore be on the strategic agenda of every company. ... To collaborate effectively with your enterprise IT and ensure the best possible support when making IT-related investment decisions, your IT service providers need feedback. For this, your portfolio of software applications must be known, deficits and opportunities for improvement need to be identified, and, above all, a coordinated investment strategy for your IT services is a must. It has to be clear how you can use your IT budget most efficiently. ... What do all these different services have to do with EA? A lot. If the services mentioned above are understood as EA services, their results form a valuable contribution to a holistic view of your company: the enterprise architecture.


Ensuring Comprehensive Data Protection: 8 NAS Security Best Practices

NAS devices are convenient as shared storage, which means they must be connected to other nodes, normally the machines inside an organization's network. However, the growing number of gadgets per employee can lead to unintentional external connections. Internet of Things (IoT) devices are a separate threat category: hackers can compromise these devices and then use them to propagate malicious code inside corporate networks. If you connect such a device to your NAS, you risk compromising NAS security and then suffering a cyberattack. ... Malicious software remains a ubiquitous threat to any node connected to the network. Malware can steal, delete, or block access to NAS data, or intercept incoming and outgoing traffic. Furthermore, the example of Stuxnet shows that powerful computer worms can disrupt and disable IT hardware or even entire production clusters. Insider threats are another category: when planning an organization's cybersecurity, IT experts reasonably focus on outside threats.
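One practical defense against the unintentional connections described above is an explicit allowlist of corporate subnets that may reach the NAS. The sketch below uses Python's standard `ipaddress` module; the subnets and addresses are made up for illustration, and a real deployment would enforce this at the firewall or on the NAS itself rather than in application code.

```python
# Hedged sketch: checking NAS clients against an allowlist of known
# corporate subnets, so stray IoT devices or external hosts are rejected.
# Subnets and addresses are hypothetical.
import ipaddress

ALLOWED_SUBNETS = [
    ipaddress.ip_network("10.0.0.0/16"),  # office workstations
    ipaddress.ip_network("10.1.0.0/24"),  # backup servers
]

def connection_allowed(client_ip: str) -> bool:
    """Allow a connection only if the client sits inside an approved subnet."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_SUBNETS)

print(connection_allowed("10.0.3.7"))     # True: office machine
print(connection_allowed("192.168.4.9"))  # False: unknown device, rejected
```

Deny-by-default rules like this shrink the attack surface that compromised IoT devices would otherwise use to reach the NAS.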


How to design the right type of cyber stress test for your organisation

The success of a cyber stress test largely depends on the realism and relevance of the scenarios and attack vectors used. These should be based on a thorough understanding of the current threat landscape, industry-specific risks, and emerging trends. Scenarios may range from targeted phishing campaigns and ransomware attacks to sophisticated, state-sponsored intrusions. By selecting scenarios that are plausible and aligned with your organisation’s risk profile, you can ensure that the stress test provides valuable insights and prepares your team for real-world challenges. ... A well-designed cyber stress test should encompass a range of activities, from table-top exercises and digital simulations to red team-blue team engagements and penetration testing. This multi-faceted approach allows you to assess the organisation’s capabilities across various domains, including detection, investigation, response, and recovery. Additionally, the stress test should include a thorough evaluation process, with clearly defined success criteria and mechanisms for gathering feedback and lessons learned.
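Selecting scenarios "plausible and aligned with your organisation's risk profile" can be made systematic by scoring each candidate on plausibility and business impact and ranking the results. The scenarios and scores below are invented for illustration; in practice both values would come from threat intelligence and business-impact analysis.

```python
# Illustrative sketch: ranking candidate attack scenarios for a cyber stress
# test by plausibility x impact, so the exercise targets the organisation's
# actual risk profile. All numbers are hypothetical.

scenarios = [
    {"name": "targeted phishing campaign", "plausibility": 0.9, "impact": 0.6},
    {"name": "ransomware via exposed RDP", "plausibility": 0.7, "impact": 0.9},
    {"name": "state-sponsored intrusion", "plausibility": 0.2, "impact": 1.0},
]

def risk_score(scenario):
    """Simple product score; real models weight these inputs more carefully."""
    return scenario["plausibility"] * scenario["impact"]

ranked = sorted(scenarios, key=risk_score, reverse=True)
for s in ranked:
    print(f"{s['name']}: {risk_score(s):.2f}")
```

Even a crude ranking like this makes the selection discussable and repeatable, and the chosen scenarios can then be exercised across the table-top, red-team/blue-team, and penetration-testing activities described above.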



Quote for the day:

“I'd rather be partly great than entirely useless.” -- Neal Shusterman