Daily Tech Digest - March 23, 2023

10 cloud mistakes that can sink your business

It’s a common misconception that cloud migration always leads to immediate cost savings. “In reality, cloud migration is expensive, and not having a full and complete picture of all costs can sink a business,” warns Aref Matin, CTO at publishing firm John Wiley & Sons. Cloud migration often does lead to cost savings, but careful, detailed planning is essential. Still, as the cloud migration progresses, hidden costs will inevitably appear and multiply. “You must ensure at the start of the project that you have a full, holistic cloud budget,” Matin advises. Cloud costs appear in various forms. Sometimes they’re in plain sight, such as the cost of walking away from an existing data facility. Yet many expenses aren’t so obvious. ... A major challenge facing many larger enterprises is leveraging data spread across disparate systems. “Ensuring that data is accessible and secure across multiple environments, on-premises as well as on applications running in the cloud, is an increasing headache,” says Darlene Williams, CIO of software development firm Rocket Software.


Developed countries lag emerging markets in cybersecurity readiness

The drastic difference in cybersecurity preparedness between developed and developing nations is likely because organizations in emerging markets started adopting digital technology more recently than their peers in developed markets. “That means many of these companies do not have legacy systems holding them back, making it relatively easier to deploy and integrate security solutions across their entire IT infrastructure,” the report said, adding that technology debt — the estimated cost or assumed impact of updating systems — continues to be a major driver of the readiness gap. The Cisco Cybersecurity Readiness Index categorizes companies into four stages of readiness — beginner, formative, progressive, and mature. ... Identity management was recognized as the most critical area of concern. Close to three in five respondents, or 58% of organizations, were in either the formative or beginner category for identity management. However, 95% were at least at some stage of deployment with an appropriate ID management application, the report said.


Observability will transform cloud security

Is this different from what you’re doing today for cloud security? Cloud security observability may not change the types or the amount of data you’re monitoring. Observability is about making better sense of that data. It’s much the same with cloud operations observability, which is more common. The monitoring data from the systems under management is mostly the same. What’s changed are the insights that can now be derived from that data, including detecting patterns and predicting future issues based on these patterns, even warning of problems that could emerge a year out. ... Cloud security observability looks at a combination of dozens of data streams for a hundred endpoints and finds patterns that could indicate an attack is likely to occur in the near or far future. If this seems like we are removing humans from the process of making calls based on observed, raw, and quickly calculated data, you’re right. We can respond to tactical security issues, such as a specific server under attack, with alerts indicating that the attacking IP address should be blocked.
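The pattern detection described above can be illustrated with a minimal sketch: flag any endpoint whose latest metric reading deviates sharply from its own history. The endpoint names and readings below are invented, and a real observability platform would correlate far richer signals than a single z-score.

```python
from statistics import mean, stdev

def detect_anomalies(samples, threshold=3.0):
    """Flag endpoints whose latest reading deviates sharply from history.

    samples: mapping of endpoint -> list of metric readings (oldest first).
    Returns endpoints whose newest reading is more than `threshold`
    standard deviations above the historical mean -- a crude stand-in
    for the pattern detection an observability platform would perform.
    """
    flagged = []
    for endpoint, readings in samples.items():
        history, latest = readings[:-1], readings[-1]
        if len(history) < 2:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (latest - mu) / sigma > threshold:
            flagged.append(endpoint)
    return flagged

metrics = {
    "web-01": [102, 98, 101, 99, 100, 103],  # steady traffic
    "web-02": [95, 100, 97, 99, 101, 950],   # sudden spike: suspicious
}
print(detect_anomalies(metrics))  # ['web-02']
```

In practice the flagged endpoint would feed the alerting pipeline rather than a print statement; the point is that the raw streams stay the same and only the derived insight is new.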


Operational Resilience: More than Disaster Recovery

Disaster recovery is fairly narrow in its definition and typically viewed in a small timeframe. Operational resilience is much broader, including aspects like the sort of governance you’ve put in place; how you manage operational risk; your business continuity plans; and cyber, information, and third-party supplier risk management. In other words, disaster recovery plans are chiefly concerned with recovery. Operational resilience looks at the bigger picture: your entire ecosystem and what can be done to keep your business operational during disruptive events. ... Part of the issue is that cyber is still seen as special. The discussion always seems to conclude with the assumption that the security team or IT department is managing a particular risk, so no one else needs to worry about it. There is a need to demystify cybersecurity. It’s only with the proper business understanding and risk ownership that you can put proper resilience mechanisms in place.


Nvidia builds quantum-classical computing system with Israel’s Quantum Machines

The DGX Quantum deploys Nvidia’s Grace Hopper superchip and its technology platform for hybrid quantum-classical computers, coupling graphics processing units (GPUs) and quantum processing units (QPUs) in one system. It is supported by Quantum Machines’ flagship OPX universal quantum control system, designed to meet the demanding requirements of quantum control protocols, including precision, timing, complexity, and ultra-low latency, according to the Israeli startup. The combination allows “researchers to build extraordinarily powerful applications that combine quantum computing with state-of-the-art classical computing, enabling calibration, control, quantum error correction and hybrid algorithms,” Nvidia said in a statement. Tech giants like Google, Microsoft, IBM, and Intel are all racing to make quantum computing more accessible and build additional systems, while countries like China, the US, Germany, India, and Japan are also pouring millions into developing their own quantum abilities.


Leveraging Data Governance to Manage Diversity, Equity, and Inclusion (DEI) Data Risk

In organizations with a healthy data culture, the counterpart to compliance is data democratization. Democratization is the ability to make data accessible to the right people at the right time in compliance with all relevant legal, regulatory, and contractual obligations. Leaders delegate responsibility to stewards for driving data culture by democratizing data so that high-quality data is available to the enterprise in a compliant manner. Such democratized data enables frontline action by placing data into the hands of people who are solving business problems. Stewards democratize data by eliminating silos and moving past the inertia that develops around sensitive data sources. An essential aspect of democratization, therefore, is compliance. Stewards will not be able to democratize data without a clear ability to assess and manage risk associated with sensitive data. That said, it is critical that DEI advocates limit democratization of DEI data, especially at the outset of their project or program. 


The Future of Data Science Lies in Automation

Much data science work is done through machine learning (ML). Proper employment of ML can ease the predictive work that is most often the end goal for data science projects, at least in the business world. AutoML has been making the rounds as the next step in data science. Part of machine learning, outside of getting all the data ready for modeling, is picking the correct algorithm and fine-tuning (hyper)parameters. After data accuracy and veracity, the algorithm and parameters have the highest influence on predictive power. Although in many cases there is no perfect solution, there’s plenty of wiggle room for optimization. Additionally, there’s always some theoretical near-optimal solution that can be arrived at mostly through calculation and decision making. Yet, arriving at these theoretical optimizations is exceedingly difficult. In most cases, the decisions will be heuristic, and any errors will be ironed out through experimentation. Even with extensive industry experience and professionalism, there is just too much room for error.
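The parameter search the excerpt describes is what AutoML automates; at its simplest it is a brute-force grid search over candidate settings. The sketch below uses a toy objective as a stand-in for a model’s validation score (the parameter names and the "optimum" are invented for illustration, not drawn from any real model):

```python
from itertools import product

def grid_search(score_fn, param_grid):
    """Exhaustively score every parameter combination and keep the best.

    score_fn: callable taking keyword parameters and returning a score
              (higher is better, e.g. cross-validated accuracy).
    param_grid: mapping of parameter name -> list of candidate values.
    """
    names = list(param_grid)
    best_params, best_score = None, float("-inf")
    for combo in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, combo))
        score = score_fn(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective standing in for a model's validation score; it peaks at
# learning_rate=0.1, depth=3 (a hypothetical optimum).
def toy_score(learning_rate, depth):
    return -((learning_rate - 0.1) ** 2) - (depth - 3) ** 2

best, score = grid_search(toy_score, {
    "learning_rate": [0.01, 0.1, 1.0],
    "depth": [2, 3, 5],
})
print(best)  # {'learning_rate': 0.1, 'depth': 3}
```

AutoML systems replace the exhaustive loop with smarter strategies (random search, Bayesian optimization), but the contract is the same: propose parameters, score them, keep the best.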


What NetOps Teams Should Know Before Starting Automation Journeys

Like all people, NetOps professionals enjoy the results of a job well done. So, while the vision of their automation journey may be big, it’s important to start with a small, short-term project that can be completed quickly. There are a couple of benefits to this approach: quick automation wins give NetOps teams confidence for future projects, and such projects generate data and feedback that teams can convert into learnings and insights for the next one. This approach can also be applied to bigger, more complex automation projects. Instead of taking on the entire scale of the project at once, NetOps teams can break it down into smaller components. ... The advantages of this approach are the same as in the quick-win scenario: There is a better likelihood of success, and more immediate feedback and data to guide NetOps teams through the entire process. Finally, as talented as most NetOps teams are, they are not likely to have all of the automation expertise in-house at any given time.


Reducing the Cognitive Load Associated with Observability

Data points need to be filtered and transformed in order to generate the proper signals. Nobody wants to be staring at a dashboard or tailing logs 24/7, so we rely on alerting systems. When an alert goes off, it is intended for human intervention, which means transforming the raw signal into an actionable event with contextual data: criticality of the alert, environments, descriptions, notes, links, etc. It must be enough information to direct attention to the problem, but not so much that the responder drowns in noise. Above all else, a page alert should require a human response. What else could justify interrupting an engineer from their flow if the alert is not actionable? When an alert triggers, analysis begins. While we eagerly wait for anomaly detection and automated analysis to fully remove the human factor from this equation, we can use a few tricks to help our brains quickly identify what’s wrong. ... Thresholds are required for alert signals to trigger. When it comes to visualization, people who investigate and detect anomalies need to consider these thresholds too. Is a given value too low or unexpectedly high?
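The transformation from raw signal to actionable event might be sketched as follows. The alert fields (criticality, environment, runbook link, notes) mirror the contextual data listed above; the field names, threshold, and runbook URL are illustrative assumptions, not a real alerting schema:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    """An actionable event: the raw signal plus the context a responding
    engineer needs. Field names here are illustrative, not a standard."""
    name: str
    value: float
    threshold: float
    criticality: str
    environment: str
    runbook_url: str
    notes: str = ""

def evaluate(name, value, threshold, **context):
    """Return an Alert only when the signal crosses its threshold --
    i.e. only when a human response is actually required."""
    if value <= threshold:
        return None  # below threshold: no page, no interruption
    return Alert(name=name, value=value, threshold=threshold, **context)

alert = evaluate(
    "api_error_rate", value=0.12, threshold=0.05,
    criticality="high", environment="production",
    runbook_url="https://wiki.example.com/runbooks/api-errors",  # hypothetical link
    notes="Error rate over 5% for 10 minutes",
)
print(alert is not None)  # True: 12% error rate exceeds the 5% threshold
```

The key design point is the early `return None`: anything that does not demand human action never becomes a page.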


The Urgent Need for AI in GRC and Security Operations: Are You Ready to Face the Future?

Another area where AI tools are transforming the IT industry is security operations. Businesses face an ever-increasing number of cyberthreats, and it can be challenging to stay ahead of these threats. AI tools can help by automating many security operations, such as threat detection and incident response. They can also help with risk assessment by analyzing large amounts of data and identifying potential vulnerabilities. The benefits of AI tools in the IT industry are clear. By automating processes and improving decision-making, businesses can save time and money while reducing the risk of errors. AI tools can also help businesses to be more agile and responsive to changes in the market. However, the use of AI tools in the IT industry also presents some challenges. One of the key challenges is the need for specialized technical expertise. While AI tools can be user-friendly, businesses still need to have specialized expertise to use the tools effectively.



Quote for the day:

"People seldom improve when they have no other model but themselves." -- Oliver Goldsmith

Daily Tech Digest - March 21, 2023

CFO Priorities This Year: Rethinking the Finance Function

Marko Horvat, Gartner VP of research, adds CFOs must transition away from optimization and start thinking about transformation. “Making things faster, more accurate, and with less effort has benefits, but each round of improvement brings diminishing returns,” he says. “CFOs must start thinking about ways to transform the function to build and enhance capabilities, such as advanced data and analytics, in order to truly unlock more value from the finance function.” Sehgal says CFOs should be asking questions such as: How do we create a futuristic vision for finance? Should short-term gains override longer-term benefits? And how do we fund digital transformation under current pressures? “CFOs are focused on elevating the role of finance in the organization to be a value integrator across the enterprise, as well as enhancing value through new strategies that not only support development but that also promote innovations for capital allocation,” he explains.


Build Software Supply Chain Trust with a DevSecOps Platform

When building an application, developers, platform operators and security professionals want to monitor vulnerabilities throughout the software supply chain. The challenge comes when multiple vulnerability scanners are used at different stages in the pipeline and different teams are notified and required to take action without proper coordination. A security-focused application platform can build in scan orchestration not only to detect vulnerabilities but also to map those findings to a workload. This feature allows developers to identify issues throughout the life cycle of their applications and helps them resolve those issues, shifting responsibility left with a higher degree of automation. Moreover, the platform can build trust with security analysts by showing the performance of application developers and helping them understand the risk that teams are facing. Once a platform detects these vulnerabilities, both at build time and at runtime, it needs to help developers triage and remediate them.
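The mapping of scanner findings to workloads could look something like the sketch below. The finding schema, workload names, and severity scores are invented for illustration; real scanners emit richer formats such as SARIF or CycloneDX:

```python
from collections import defaultdict

def map_findings_to_workloads(findings):
    """Merge findings from multiple scanners into one view per workload,
    de-duplicating by CVE so each team sees a single consolidated list.

    findings: iterable of dicts with 'scanner', 'workload', 'cve', and
    'severity' keys -- an illustrative schema, not a real scanner format.
    """
    by_workload = defaultdict(dict)
    for f in findings:
        # Keep the highest-severity report if two scanners flag the same CVE.
        existing = by_workload[f["workload"]].get(f["cve"])
        if existing is None or f["severity"] > existing["severity"]:
            by_workload[f["workload"]][f["cve"]] = f
    return {w: sorted(cves) for w, cves in by_workload.items()}

findings = [
    {"scanner": "build-scan", "workload": "payments-api",
     "cve": "CVE-2023-0001", "severity": 7.5},
    {"scanner": "runtime-scan", "workload": "payments-api",
     "cve": "CVE-2023-0001", "severity": 9.1},   # same CVE, seen again at runtime
    {"scanner": "runtime-scan", "workload": "checkout-ui",
     "cve": "CVE-2023-0002", "severity": 5.0},
]
print(map_findings_to_workloads(findings))
```

The de-duplication step is what prevents the uncoordinated-teams problem the excerpt describes: build-time and runtime scanners report the same CVE once, against one owning workload.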


Developers, unite! Join the fight for code quality

Writing good code is a craft as much as any other, and should be regarded as such. You have every right to advocate for an environment and an operational model that respect the intricacies of what you do and the significance of the outcome. It’s important to value, and feel valued for, what you do. And not just for your own immediate happiness—it’s also a long-term investment in your career. Making things you don’t think are any good tends to wear on the psyche, which doesn’t exactly feed into a more motivated workday. In fact, a study conducted by Oxford University’s Saïd Business School found that happy workers were 13% more productive. What’s good for your craft is ultimately best for business—a conclusion both engineers and their employers can feel good about. Software plays a big role at just about every level of society—it’s how we create and process information, access goods and services, and entertain ourselves. With the advent of software-defined vehicles, it even determines how we move between physical locations.


Why data literacy matters for business success

Aligning data strategies with overall business strategy and operations is no mean feat. Chief Data Officers (CDOs) are ideal candidates for marrying together data analytics and the wider business, given their appreciation of informed decision making and their desire to foster a data culture where internal information is properly managed and engaged with throughout the organisation. Moreover, their understanding of the technology landscape will assist when making platform and software selections. This stands to benefit all departments, which will gain access to the tools and skills needed to work with data and derive insights. CDOs also embody the “can do” approach to professional development, believing it’s possible to train employees in data-related skills, regardless of their technical proficiency. There’s a well-established correlation between hiring a CDO and business success: research from Forrester suggests that 89% of organisations harnessing analytics to improve operations, and that appointed a CDO to oversee the process, have seen a positive business impact.


What the 'new automation' means for technology careers

AI is already playing a part in handling technology tasks. A survey released by OpsRamp finds more than 60% of companies adopting AIOps, which applies AI to monitor and improve IT operations themselves. The greatest IT operations challenge for enterprises in 2023 was automating as many operations as possible, cited by 66% of respondents. The main benefits of AIOps seen so far include reduction in open incident tickets (65%); reduction in mean time to detect or restore (56%); and automation of tedious tasks (52%). The latest IT staffing data from Janco Associates finds recent layoffs affected data center and operations staff, with business leaders looking to automate IT processes and reporting. The apparent trend here is that those pursuing careers in technology need to look higher up the stack -- at applications and business consulting. However, there's still a lot of work for people working with the plumbing and code. Unfortunately, getting to automation-driven abstraction -- especially if it involves AI -- requires some manual work up front.


How Cybersecurity Delays Critical Infrastructure Modernization

For critical infrastructure organizations, building a security strategy that works from both an operational technology (OT) and consumer data perspective is not as straightforward as it is in many other industries. Safely storing this data while implementing the latest technology has proved to be a significant challenge across the sector, meaning the service provided by these companies is being hampered. These concerns have prevented a range of technologies from being integrated quickly or at all. These technologies include renewable energy projects, electric vehicle technology, natural disaster contingencies and moving towards smarter grid solutions to replace aging infrastructure. Older operational technology becomes difficult to update and secure sufficiently while the use of third-party software also reduces the level of control organizations have over their data. In addition to this, a lack of automation increases the chances of human error, which could present opportunities to cybercriminals.


What Are Foundation AI Models Exactly?

The generative AI solution can analyze input data against 175 billion parameters and profoundly understand written language. The smart tool can answer questions, summarize and translate text, produce articles on a given topic, write code, and much more. All you need is to provide ChatGPT with the right prompts. OpenAI’s groundbreaking product is just one example of the foundation models that are transforming AI application development as we know it. Instead of training multiple models for separate use cases, you can now leverage a pre-trained AI solution to enhance or fully automate tasks across multiple departments and job functions. With foundation AI models like ChatGPT, companies no longer have to train algorithms from scratch for every task they want to enhance or automate. Instead, you only need to select a foundation model that best fits your use case – and fine-tune its performance for the specific objective you’d like to achieve.
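“Providing the right prompts” often means few-shot prompting: steering a pre-trained model toward a task with instructions and worked examples instead of training anything. A minimal sketch of assembling such a prompt (the layout is one common convention, not a requirement of any particular model, and the actual API call is omitted):

```python
def build_prompt(task, examples, query):
    """Assemble a few-shot prompt: task instructions, labeled examples,
    then the new input for the foundation model to complete."""
    lines = [f"Task: {task}", ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model's completion supplies the answer
    return "\n".join(lines)

prompt = build_prompt(
    "Classify the sentiment of the input as positive or negative.",
    examples=[("Great product, works perfectly.", "positive"),
              ("Broke after two days.", "negative")],
    query="Exceeded my expectations.",
)
# The assembled prompt would be sent to a hosted model endpoint
# (e.g. a chat-completion API); that call is deliberately left out here.
print(prompt.splitlines()[0])
```

Fine-tuning goes one step further, updating the model’s weights on task-specific data, but for many use cases prompt assembly alone is enough to adapt a foundation model.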


As hiring freezes and layoffs hit, tech teams struggle to do more with less

There are a number of organizational hurdles holding back employees’ learning and development, Pluralsight found. For HR and L&D directors, budget constraints and costs were identified as the biggest barriers to upskilling (30%). This was also true for technology leaders, with 15% blaming financial constraints for getting in the way of employee upskilling. For technology workers themselves, finding time to invest in their own training was identified as the main issue: 42% of workers said they were too busy to upskill, with 18% saying their manager didn’t allow any time during the week to learn new skills. As a result, 21% of tech workers feel pressured to learn outside of work hours. ... However, the report added that giving employees time to invest in their training, address skills gaps, and gain valuable growth opportunities is a key factor in retention. “Upskilling during work hours will hinder short-term productivity, and managers often bear the brunt of this stress. But don’t sacrifice short-term productivity for long-term success,” the report said.


CISA kicks off ransomware vulnerability pilot to help spot ransomware-exploitable flaws

CISA says it will seek out affected systems using existing services, data sources, technologies, and authorities, including CISA's Cyber Hygiene Vulnerability Scanning. CISA initiated the RVWP by notifying 93 organizations identified as running instances of Microsoft Exchange Server with a vulnerability called "ProxyNotShell," widely exploited by ransomware actors. The agency said this round demonstrated "the effectiveness of this model in enabling timely risk reduction as we further scale the RVWP to additional vulnerabilities and organizations." Eric Goldstein, executive assistant director for cybersecurity at CISA, said, "The RVWP will allow CISA to provide timely and actionable information that will directly reduce the prevalence of damaging ransomware incidents affecting American organizations. We encourage every organization to urgently mitigate vulnerabilities identified by this program and adopt strong security measures consistent with the U.S. government's guidance on StopRansomware.gov."


A Simple Framework for Architectural Decisions

Technology Radar captures techniques, platforms, tools, languages and frameworks, and their level of adoption across an organization. However, this may not cover every need. Establishing consistent practices for things that apply across different parts of the system can be helpful. For example, you might want to ensure all logging is done in the same format and with the same information included. Or, if you’re using a REST API, you might want to establish some conventions around how it should be designed and used, like what headers to use or how to name things. Additionally, if you’re using multiple similar technologies, it can be useful to provide guidance on when to use each one. Technology Standards define the rules for selecting and using technologies within your company. They reduce the risk of adopting new technology in a suboptimal way and drive consistency across the organization.
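A logging convention like the one described (“all logging in the same format, with the same information included”) might be codified as a shared formatter that every service installs. A minimal sketch using Python’s standard logging module; the JSON field names are illustrative choices, not a standard:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit every log record in one agreed-upon JSON shape, so logs from
    different services can be parsed uniformly."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # 'service' is injected via the `extra` argument; default
            # to "unknown" for records that omit it.
            "service": getattr(record, "service", "unknown"),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("orders")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Every service logging through this formatter produces the same shape:
log.info("order created", extra={"service": "orders-api"})
```

Publishing the formatter as a small shared library is one way a Technology Standard becomes something teams adopt by default rather than a document they have to remember.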



Quote for the day:

"Leadership is not about titles, positions, or flow charts. It is about one life influencing another." -- John C. Maxwell

Daily Tech Digest - March 20, 2023

The Rise of the BISO in Contemporary Cybersecurity

In general, “A BISO is assigned to provide security leadership for one particular business unit, group, or team within the greater organization,” explains Andrew Hay, COO at Lares Consulting. “Using a BISO divides responsibility in large companies, and we often see the BISOs reporting up to the central CISO for the organization.” “A BISO is responsible for establishing or implementing security policies and strategies within a line of business,” adds Timothy Morris, chief security advisor at Tanium. “Before the BISO role became popular, other director-level roles performed similar functions in larger organizations as an information security leader.” The precise role of the BISO varies from company to company depending on the needs of that company. “In some cases, the BISO will hold a senior position reporting directly to the CISO, CTO, or CIO,” explains Kurt Manske, managing principal for strategy, privacy, and risk at Coalfire. “At this level, the BISO acts as a liaison with business unit leaders and executives to promote a strong information security posture across the organization.”


CEO directives: Top 5 initiatives for IT leaders

Cybersecurity became a bigger issue this year for Josh Hamit, senior VP and CIO at Altra Federal Credit Union, due in part to Russia’s invasion of Ukraine, which touched off warnings about possible Russia-backed hackers stepping up cyberattacks on US targets. As a result, Hamit has brought extra attention to partnering with Altra’s CISO to perfect security fundamentals, cyber hygiene and best practices, and layered defenses. More likely cyber scenarios have IT leaders increasingly concerned as well. For instance, three out of four global businesses expect an email-borne attack will have serious consequences for their organization in the coming year, according to CSO Online’s State of Email Security report. Hybrid work has led to more email (82% of companies report a higher volume of email in 2022) and that has incentivized threat actors to steal data through a proliferation of social engineering attacks, shifting their focus from targeting the enterprise network itself to capitalizing on the vulnerable behaviors of individual employees.


Breach Roundup: Med Devices, Hospitals and a Death Registry

A vulnerability the Indian government at first said did not exist has now, it says, been fixed. The Indian Ministry of Railways in December denied that the data of 30 million people allegedly on sale on the dark net came from a hacker breaching Rail Yatri, the official app of Indian Railways. On Wednesday, Minister of State for Electronics and Information Technology Rajeev Chandrasekhar said the Indian Railway Catering and Tourism Corp. fixed the issue and took the necessary precautions to prevent its recurrence. Neither Rail Yatri nor the minister disclosed the penalty paid for the incident. ... A February data breach of the U.S. Marshals Service systems, which led to hackers maliciously encrypting systems and exfiltrating sensitive law enforcement data, got worse. A threat actor is reportedly selling 350 gigabytes of data allegedly stolen from the servers for $150,000 on a Russian-speaking hacking forum. The data on sale allegedly includes "documents from file servers and work computers from 2021 to February 2023, without flooding like exe files and libraries," reported Bleeping Computer.


BianLian ransomware group shifts focus to extortion

Researchers observed that the speed at which BianLian posts the masked details has also increased over time. If one is to accept the date of compromise listed by BianLian as accurate, the group averages just ten days from an initial compromise to ratcheting up the pressure on a victim by posting masked details. In some instances, BianLian appears to have posted masked details within 48 hours of a compromise, Redacted said in its report. “With this shift in tactics, a more reliable leak site, and an increase in the speed of leaking victim data, it appears that the previous underlying issues of BianLian’s inability to run the business side of a ransomware campaign appear to have been addressed,” Redacted said, adding that these improvements are likely the result of gaining more experience through their successful compromise of victim organizations. The BianLian group appears to bring close to 30 new command-and-control (C2) servers online each month. In the first half of March, the group has already brought 11 new C2 servers online. The average lifespan of a server is approximately two weeks, Redacted said.


CIOs Must Make Call on AI-Based App Development

Erlihson says other key stakeholders necessary for an AI-based app development strategy include the chief data officer (CDO), who can help manage and govern the organization’s data assets, ensure data quality, and make sure that data is used in compliance with regulations. The chief financial officer (CFO) can ensure that the organization’s investments in AI-based tools are aligned with the financial objectives and overall budget of the company. “It's also important to include business leaders to identify business problems that can be solved by AI, providing use cases, and setting priorities for AI-based app development based on business needs,” he says. Legal and compliance must also be involved to ensure AI-based tools are compliant with data privacy laws and regulations, security, and ethical use of AI. “Finally, operations and IT teams are needed to provide feedback on the feasibility and scalability of AI-based tool development and deployment and to assure that the necessary IT infrastructure required to support AI-based app deployment is in place,” Erlihson says.


How Design Thinking and Improved User Experiences Contribute to Customer Success

Everything is about the needs, preferences and behaviors of users and the frustrations they sometimes face, with a continuous feedback loop used for perpetual reporting. The model emphasizes the need for diverse voices, experimentation with new ways of working, rapid prototyping and iteration, as well as a commitment to constantly improving the quality of service. As an example of experimentation, Airbnb unlocked growth by using professional photography to replace poor-quality images advertising property rentals in New York and saw an instant uptick. Done right, it has the benefit of challenging developer assumptions and the management status quo. It helps to mitigate the narrative of ‘we’ve always done it this way’ and the temptation of ‘bloatware’, which adds pointless features and functions. Despite the name, Design Thinking doesn’t just impact software user experience design; product managers and others are also involved to create a holistic understanding of what is happening.


Sovereign clouds are becoming a big deal again

Although sovereign clouds aim to increase data privacy and sovereignty, there are concerns that governments could use them to collect and monitor citizens’ data, potentially violating privacy rights. Many companies prefer to use global public cloud providers if they believe that their local sovereign cloud could be compromised by the government. Keep in mind that in many cases, the local governments own the sovereign clouds. Sovereign clouds may be slower to adopt new technologies and services compared to global cloud providers, which could limit their ability to innovate and remain competitive. Consider the current artificial intelligence boom. Sovereign clouds won’t likely be able to offer the same types of services, considering that they don’t have billions to spend on R&D like the larger providers. Organizations that rely on a sovereign cloud may become overly dependent on the government or consortium operating it, limiting their flexibility and autonomy. As multicloud becomes a more popular architecture, I suspect the use of sovereign clouds will become more common. 


Why You Need a Plan for Ongoing Unstructured Data Mobility

Most organizations keep all or most of their data indefinitely, but as data ages, its value changes. Some data becomes “cold” — infrequently accessed or not needed after 30 days — yet must be retained for a period of time for regulatory or compliance reasons; some data should be deleted; and some data may be required for research or analytics purposes later. ... Ensuring easy mobility for the data as it ages and understanding the best options for different data segments is paramount. Another reason why unstructured data mobility is imperative is the growing adoption of AI and machine learning. Once data is no longer in active use, it has the potential for a second or third life in big data analytics programs. You might migrate some data to a low-cost cloud tier for archival purposes, but IT or other departments with the right permissions should be able to easily discover it later and move it to a cloud data lake or AI tool when needed for many different use cases.
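An age-based tiering policy like the one described could be sketched as follows. The thresholds and tier names are illustrative assumptions; a real policy would also factor in compliance holds, deletion rules, and per-dataset exceptions:

```python
import time

# Illustrative policy choices, not a standard:
TIERS = [
    (30, "hot"),                # accessed within 30 days: primary storage
    (365, "warm"),              # up to a year old: cheaper object storage
    (float("inf"), "archive"),  # older: low-cost archival tier
]

def classify(age_days):
    """Return the storage tier for a file based on days since last access."""
    for limit, tier in TIERS:
        if age_days <= limit:
            return tier

def plan_moves(files, now=None):
    """files: mapping of path -> last-access timestamp (epoch seconds).
    Returns the tier each file should live in under the policy above."""
    now = now or time.time()
    return {path: classify((now - atime) / 86400)
            for path, atime in files.items()}

now = time.time()
print(plan_moves({
    "/data/report.csv": now - 5 * 86400,      # 5 days old
    "/data/2022-logs.gz": now - 400 * 86400,  # ~400 days old
}, now=now))
```

Keeping the policy as data (the `TIERS` table) rather than hard-coded branches makes it easy to adjust thresholds as storage costs or retention requirements change.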
 

Microsoft: 365 Copilot chatbot is the AI-based future of work

Microsoft CEO Satya Nadella said the new 365 Copilot chatbot will “radically transform how computers help us think, plan and act.” “Just as we can’t imagine computing today without a keypad, mouse or multitouch, going forward we won’t be able to imagine computing without copilots and natural language prompts that intuitively help us with continuation, summarization, chain-of-thought reasoning, reviewing, modifying and acting,” he said. Copilot combines a large language model (LLM) with the 365 suite and the user data contained therein. Through the use of a chatbot interface and natural language processing, users can ask questions of Copilot and receive human-like responses, summarize online chats, and generate business products. Copilot in Word, for example, can jump-start the creative process by giving a user a first draft to edit and iterate on — saving hours in writing, sourcing, and editing time, Microsoft said in a blog post. “Sometimes Copilot will be right, other times usefully wrong — but it will always put you further ahead,” the company said.


How CISOs Can Start Talking About ChatGPT

Do we have the right oversight structures in place? The fundamental challenge with AI is governance. From the highest levels, your company needs to devise a system that manages how AI is studied, developed and used within the enterprise. For example, does the board want to embrace AI swiftly and fully, to explore new products and markets? If so, the board should designate a risk or technology committee of some kind to receive regular reports about how the company is using AI. On the other hand, if the board wants to be cautious with AI and its potential to upend your business objectives, then perhaps it could make do with reports about AI only as necessary, while an in-house risk committee tinkers with AI’s risks and opportunities. Whatever path you choose, senior management and the board must establish some sort of governance over AI’s use and development. Otherwise, employees will proceed on their own, and the risks only proliferate from there.



Quote for the day:

"Humility is a great quality of leadership which derives respect and not just fear or hatred." -- Yousef Munayyer

Daily Tech Digest - March 17, 2023

6 principles for building engaged security governance

No governance strategy can be built without knowing where the organization is currently and where it is going. Start by understanding the organization's core business practices, its product portfolio, customers, geographical footprint, and ethos and culture -- all from a security perspective. This should help answer key security-related questions, such as who does what, why they do it and for whom. Next, gain a better understanding of the organizational structure and current security standards, guidelines, regulations and frameworks. Get a better grasp of how security functions operate. Conduct a comprehensive review of the security policies in place and how effective they are. Understand the current state of security procedures, projects and activities, tests and exercises, as well as the current level of information security controls and the future roadmap. Assess the skills and capabilities of security practitioners and their responsibilities, and benchmark them against best practices in the industry to expose the gaps in existing capabilities and activities.


Cyber attribution: Vigilance or distraction?

In some situations, effective attribution can be a valuable source of intelligence for organizations that suffered a cyber breach. Threat actors go to great lengths to cover their tracks, and any evidence and facts gathered through attribution can bring organizations closer to catching the perpetrators. Deploying a good Cyber Threat Intelligence (CTI) program helps organizations understand which current or future threats can impact their business operations. Some organizations don’t treat threat intelligence seriously because they already have their “go-to” to blame, or they simply believe no one will attack a small organization. During the WannaCry attack, we witnessed a prime example of a poor interpretation of threat intelligence, when organizations and ISPs started blocking access to a sinkhole URL discovered by security researcher Marcus Hutchins. Rather than being a malicious website, the URL was a kill switch: as long as infected devices could connect to it, the malware’s payload did not activate, so blocking it resulted in further infections.


A Comprehensive Checklist For IoT Project Success

IoT projects often require a variety of specialized knowledge and skills, from edge/gateway devices to networking, cloud platforms, security, and real-time dashboard displays. Building a team with the appropriate expertise to manage this technology and other necessary skills helps ensure that the project is designed, developed and implemented successfully. If you do not have team members on hand with the necessary expertise, consider outsourcing some or all of the work to third-party IoT experts. ... IoT systems often need to handle vast amounts of data and potentially millions of devices at once, and planning for scalability ensures that the system’s architecture can handle the expected load (network coverage, reliability, bandwidth, latency, etc.) at both the edge/gateway device level and the cloud platform level (e.g., end-user dashboards, alerts, and more). Best practices here would include designing for modularity and flexibility, using scalable technologies, implementing caching and load balancing, and generally planning for growth.


Principles for Adopting Microservices Successfully

While microservices architecture has many benefits, it also introduces new challenges and potential pitfalls. One common pitfall to avoid is creating too many or too few services. Creating too many services can lead to unnecessary complexity, while creating too few services can make it difficult to maintain and scale the application. It is important to strike a balance by breaking the application into small, independent services that are focused on specific business functions. Another common pitfall is not considering the operational overhead of microservices. ... It is important to ensure the team has the necessary skills and resources to manage a microservices architecture, including monitoring, debugging, and deploying services. Finally, it is important to avoid creating overly coupled services. Services that are tightly coupled can create dependencies and make it difficult to make changes to the application. It is important to design services with loose coupling in mind, ensuring that each service is independent and can be modified or replaced without affecting other services.
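One common way to achieve the loose coupling described above is event-driven communication: a service publishes an event rather than calling another service directly. The sketch below is a minimal in-process illustration; the `EventBus`, the topic name, and the order/inventory services are all hypothetical, and in production this role would be played by a message broker such as Kafka or RabbitMQ.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """A toy publish/subscribe bus standing in for a real message broker."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)

# The order service only publishes; it holds no reference to inventory code.
def place_order(bus: EventBus, order: dict) -> None:
    bus.publish("order.placed", order)

# The inventory service can be modified or replaced without touching orders.
class InventoryService:
    def __init__(self, bus: EventBus) -> None:
        self.reserved: list[str] = []
        bus.subscribe("order.placed", self.on_order_placed)

    def on_order_placed(self, event: dict) -> None:
        self.reserved.append(event["sku"])
```

Because the only shared contract is the event schema, either side can be redeployed independently, which is exactly the property tight point-to-point calls sacrifice.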


Why Audit Logs Are Important

Audit logs have a different purpose and intended audience when compared to the system logs written by your application’s code. Whereas those logs are usually designed to help developers debug unexpected technical errors, audit logs are primarily a compliance control for monitoring your system’s operation. They’re helpful to regulatory teams, system administrators and security practitioners who need to check that correct processes are being followed. Audit logs also differ in the way they’re stored and retained. They’re usually stored for much longer periods than application logs, which are unlikely to be kept after an issue is solved. Because audit logs are a historical record, they could be retained indefinitely if you have the storage available. ... The information provided by an audit log entry will vary depending on whether the record relates to authentication or authorization. If it’s an authentication attempt, the request will have occurred in the context of a public session. Authorization logs, written by AuthZ services like Cerbos, will include the identity of the logged-in user.
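To make the distinction concrete, here is a minimal sketch of the kind of structured, append-only record an audit trail might contain. The field names (actor, action, resource, outcome) reflect common practice, not the actual schema of Cerbos or any specific product, and the example values are invented.

```python
import json
import sys
from datetime import datetime, timezone

def audit_event(actor: str, action: str, resource: str,
                outcome: str, stream=sys.stdout) -> dict:
    """Emit one audit record as a JSON line on the given stream."""
    record = {
        # UTC timestamps keep long-retention records comparable across systems.
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,          # for authorization events: the logged-in identity
        "action": action,        # e.g. "document.delete"
        "resource": resource,    # what the action targeted
        "outcome": outcome,      # e.g. "allow" / "deny"
    }
    stream.write(json.dumps(record) + "\n")
    return record
```

Writing one JSON object per line keeps the trail machine-parseable for the compliance and security teams the article mentions, and it can be shipped to cheap long-term storage independently of the application's debug logs.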


Quantum Computing Is the Future, and Schools Need to Catch Up

Thankfully, things are starting to change. Universities are exposing students sooner to once-feared quantum mechanics courses. Students are also learning through less-traditional means, like YouTube channels or online courses, and seeking out open-source communities to begin their quantum journeys. And it’s about time, as demand is skyrocketing for quantum-savvy scientists, software developers and even business majors to fill a pipeline of scientific talent. We can’t keep waiting six or more years for every one of those students to receive a Ph.D., which is the norm in the field right now. Schools are finally responding to this need. Some universities are offering non-Ph.D. programs in quantum computing, for example. In recent years, Wisconsin and the University of California, Los Angeles, have welcomed inaugural classes of quantum information master’s degree students into intensive year-long programs. U.C.L.A. ended up bringing in a much larger cohort than the university anticipated, demonstrating student demand.


Forget the hybrid cloud; it’s time for the confidential cloud

“The use cases are expanding rapidly, particularly at the edge, because as people start doing AI and machine learning processing at the edge for all kinds of reasons [such as autonomous vehicles, surveillance infrastructure management], this activity has remained outside of the security perimeter of the cloud,” said Lavender. The traditional cloud security perimeter is based on the idea of encrypting data-at-rest in storage and as it transits across a network, which makes it difficult to conduct tasks like AI inferencing at the network’s edge. This is because there’s no way to prevent information from being exposed during processing. “As the data there becomes more sensitive — particularly video data, which could have PII information like your face or your driver’s [license] or your car license [plate] number — there’s a whole new level of privacy that intersects with confidential computing that needs to be maintained with these machine learning algorithms doing inferencing,” said Lavender.


DevOps and Hybrid Cloud: A Q+A With Rosalind Radcliffe

The hardest thing to change in any transformation is the culture. The same is true in the IBM CIO office. Both new and experienced developers are learning from each other in a non-penalty environment. Although we have a full hybrid cloud and systems running in lots of places, the reality is I would like to run as much as appropriate on IBM Z from a security and availability or always-on standpoint. I can keep things more available on the platform because of the hardware stability in addition to all the agile capabilities that I can exploit. Meanwhile, I continually chip away at the fears and the misconceptions about development on z/OS. One way to start is by removing that fear and making it simpler and more accessible: encouraging people to play with z/OS in IBM Cloud with Wazi as a Service allows them to experiment, understand and learn in a penalty-free environment. They have the freedom to know they are not breaking an existing system or impacting production.


Cyberattackers Continue Assault Against Fortinet Devices

Fortinet described the attack on its customers' devices in some detail in its advisory. The attackers had used the vulnerability to modify the device firmware and add a new firmware file. The attackers gained access to the FortiGate devices via the FortiManager software and modified the devices' start-up script to maintain persistence. The malicious firmware could have allowed for data exfiltration, the reading and writing of files, or given the attacker a remote shell, depending on the command the software received from the command-and-control (C2) server, Fortinet stated. More than a half dozen other files were modified as well. The incident analysis, however, lacked several critical pieces of information, such as how the attackers gained privileged access to the FortiManager software and the date of the attack, among other details. When contacted, the company issued a statement in response to an interview request: "We published a PSIRT advisory (FG-IR-22-369) on March 7 that details recommended next steps regarding CVE-2022-41328," the company said. 


3 ways layoffs will impact IT jobs in 2023

With no oversight, shadow IT services and tools increase risk and vulnerability to attack, or more commonly, poor security hygiene. With mounting to-do lists and more projects than ever, overworked IT teams may default to rubber-stamping access in the name of productivity. But failure to properly govern identities within the organization can lead to a chain reaction of regulatory and budgetary compliance missteps. Automation tools can help ease identity governance worries internally, but IT teams should still be cognizant of what’s being used by employees externally and the business risks they pose. In the case of shadow IT, too many tech tools and services can be seen as the problem. But in many ways, they can also serve as a solution. The right technology can go a long way in alleviating common IT burdens – the key is choosing solutions that work well within a company’s existing technology stack. Advocating for software that is easy for employees to use and integrate will go a long way. 



Quote for the day:

"All leaders ask questions, keep promises, hold themselves accountable and atone for their mistakes." -- James Kouzes and Barry Posner

Daily Tech Digest - March 15, 2023

Critical Thinking: The Overlooked IT Management Skill

Critical thinking is essential to IT leadership because leaders sit at the conjunction of four distinct worlds: data, technology, business processes, and people. “An IT leader has to think logically about how to integrate these four distinct capabilities into a reasonable response to organizational challenges,” says Michael Williams, an associate professor of information systems technology management at Pepperdine University. ... Solutions to complex IT problems are not black and white, observes Sydney Buchel, a senior consultant with cybersecurity and compliance firm BARR Advisory. “IT environments vary, based on complexity, size, the data being processed, unique risks, and integrations with other platforms,” she says. “This means that making key decisions for your IT environment cannot be impulsive -- it requires gathering, analyzing, and conceptualizing information to ensure thoughtful decision-making.” ... Critical thinking requires time and practice. “Collect all relevant information: evidence, facts, research, and perspectives from trusted colleagues, as well as your own thoughts and experiences,” Buchel advises.


Don’t do IT yourself: The trick to ensuring business alignment

There are many ways to implement an ITSC, but to start, you and your committee members must work together continuously on a strategic plan until the planning group can finalize an immediate one-year plan that is approved by all departments. Then each department must take this one-year plan and decide what resources are needed to accomplish their objectives — a process that should include conversations with IT to help determine what is needed. These requirements should then be examined by IT and senior department heads to determine initial time and cost estimates, along with expected ROI, which should be the responsibility of the requesting department. With these plans now submitted to the ITSC, the committee (which, remember, is composed of all direct reports of the CEO or the COO to ensure all departments are included) then determines whether the systems requested do indeed represent what the company needs and at the speed they are needed. By doing so, the committee can determine whether staffing is sufficient or if additional resources are required to accomplish planned goals.


The importance of software testing for Digital Transformation

IT costs represent significant overheads and budget cuts proliferate. With smaller teams and fewer resources, teams have to do more with less when it comes to software development in order to keep pace with Digital Transformation and customer demand. Speed is a must – but if the customer experience is to be protected, then quality must also be prioritized. Often test automation is a late addition to the Digital Transformation process, but this causes many risks and challenges along the way. As enterprise organizations develop applications that rely on continuous software updates, automating testing is a crucial element to increasing release speeds and improving application quality, helping the organization run more efficiently to meet its bottom line. Test automation gives organizations the ability to monitor and assess risk in real time or even prevent issues before they occur. By adopting this real-time, pre-emptive approach, organizations can stave off major disruptions that can impact everything from productivity to customer experience and revenue.


Bias Busters: The perils of executive typecasting

A common obstacle to good decision making is executives’ adherence to role theory, a concept in sociology and psychology that suggests that most people categorize themselves and others according to socially defined roles—as a parent, a manager, or a teacher, for instance. They adopt norms associated with designated roles, behave accordingly, and, in a form of groupthink, expect others to do the same. ... Organizations must actively encourage dissent and make it safe for individuals at all levels, regardless of role, to share contrarian ideas. In this case, if the CFO could separate her idea to divest from her status in the organization, she might get a fairer shake from everyone involved. One way to do that would be to engage individuals and teams in a “what you have to believe” assessment, highlighting the discrepancies between the product line’s current performance and the resources needed to bring it back to premier status. Such an assessment could put more facts into and structure around strategy discussions.


Why Asia is moving to multi-cloud

So, what is it that attracts Asia Pacific (APAC) customers to his company’s suite of products? Orchard highlighted two reasons that he consistently hears from APAC firms that adopt his organization’s products: a broad ecosystem, and the ability to bridge the cloud skills gap. “With the breadth of technologies in use by our customers across both traditional data center vendors, public clouds, and SaaS providers, they need a vendor whose focus is on the ecosystem. And with over 2,600 providers for Terraform, we fit that bill better than any other vendor in the industry,” he told DCD. In addition, Orchard says standardization through the HashiCorp Configuration Language (HCL) language used to configure its solutions using code can help address the ongoing skills shortage in cloud professionals. Indeed, HCL as used by Terraform was lauded in GitHub’s latest State of the Octoverse report as the fastest-growing language on GitHub. Though the focus of infrastructure-as-code is on provisioning infrastructure, there are secondary benefits to organizations.


Navigating The AI Minefield: HR Grapples With Bias & Privacy Concerns

To guard against bias and potential liability, employers must provide notice and obtain consent from applicants and employees before using AI tools. This includes explaining what type of tool is being used and providing enough information for individuals to understand the criteria being evaluated. Employers must also be mindful of the way their AI tools can affect people with disabilities and be prepared to provide reasonable accommodations. By taking these steps, employers can harness the power of AI while avoiding bias and promoting diversity and inclusion in the workplace. Artificial intelligence (AI) tools, though designed to help employers make better decisions, could be causing more harm than good due to algorithmic bias. The problem arises when the AI tool is developed and monitored incorrectly, leading to bias against certain demographic groups, according to experts in the field. ... Additionally, AI tools conducting background checks can pull data from a much broader area than traditional selection tools, such as social media, and trigger data privacy concerns as well. 


DNS data shows one in 10 organizations have malware traffic on their networks

More than a quarter of that traffic went to servers belonging to initial access brokers, attackers who sell access into corporate networks to other cybercriminals, the report stated. “As we analyzed malicious DNS traffic of both enterprise and home users, we were able to spot several outbreaks and campaigns in the process, such as the spread of FluBot, an Android-based malware moving from country to country around the world, as well as the prevalence of various cybercriminal groups aimed at enterprises,” Akamai said. “Perhaps the best example is the significant presence of C2 traffic related to initial access brokers (IABs) that breach corporate networks and monetize access by peddling it to others, such as ransomware as a service (RaaS) groups.” Akamai operates a large DNS infrastructure for its global CDN and other cloud and security services and is able to observe up to seven trillion DNS requests per day. Since DNS queries attempt to resolve the IP address of a domain name, Akamai can map requests that originate from corporate networks or home users to known malicious domains.
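The core mapping step described above can be sketched simply: compare each queried name against a feed of known-malicious domains, matching subdomains as well. This is a simplified illustration in the spirit of the analysis, not Akamai's actual pipeline, and the blocklist entries are invented placeholders.

```python
# Hypothetical threat-intelligence feed; real feeds contain millions of entries.
MALICIOUS_DOMAINS = {"bad-c2.example", "iab-broker.example"}

def is_flagged(query: str, blocklist: set[str] = MALICIOUS_DOMAINS) -> bool:
    """Flag a queried name if it, or any parent domain, is blocklisted."""
    name = query.rstrip(".").lower()
    labels = name.split(".")
    # Walk up the label hierarchy: a.b.bad-c2.example matches bad-c2.example.
    return any(".".join(labels[i:]) in blocklist for i in range(len(labels) - 1))

def summarize(queries: list[str]) -> dict[str, int]:
    """Count how many observed queries touched a known-bad domain."""
    flagged = sum(is_flagged(q) for q in queries)
    return {"total": len(queries), "flagged": flagged}
```

At the scale the article cites (trillions of requests per day), the same logic would run against streaming data with pre-indexed feeds, but the principle of resolving each query name against known C2 infrastructure is the same.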


Heart Device Maker Says Hack Affected 1 Million Patients

The incident illustrates how deeply networked connectivity has penetrated the medical device market, a development that has created new opportunities for hackers to steal personal information in an industry historically unaccustomed to fending off threat actors. Information potentially disclosed in the cybersecurity incident includes individuals' names, addresses, birthdates and Social Security numbers. "It may also be inferred that you used or were considered for use of a Zoll product," the company says in a sample breach notification letter. "More and more medical devices are becoming connected to the network and internet and, in almost all cases, the manufacturer is gaining access to device and patient information," said security researcher Jason Sinchak, who leads cybersecurity firm Level Nine's medical device product security practice. "What was previously an embedded medical device manufacturing organization becomes a software-as-a-service and managed service organization," he said.


How machine learning is changing the way businesses think about customer behavior

“It’s crucial to ensure the emotional data captured accurately reflects the inner feelings of the customers rather than just their expressed emotions through specific keywords or loud expressions. Systems based on nonrelevant data are likely to result in a waste of time and money and will probably have lower success rates compared to systems that incorporate genuine emotion detection and personality assessment,” he adds. This new frontier of so-called Emotional Intelligence-as-a-Service has the potential to significantly impact customer behavior by providing valuable insights into their preferences and motivations. By personalizing the customer experience based on this real-time emotional intelligence, organizations can improve customer satisfaction and assist customers in achieving their goals more effectively. ... “In these new virtual environments, just imagine the impact of a virtual agent in a new Web3 or metaverse world that has its own unique personality and style and can truly understand yours,” Liberman says.


The philosopher: A conversation with Grady Booch

The story of computing is the story of humanity. This is a story of ambition, invention, creativity, vision, avarice, and serendipity, all powered by a refusal to accept the limits of our bodies and our minds. As we co-evolve with computing, the best of us and the worst of us is amplified, and along the way, we are challenged as to what it means to be intelligent, to be creative, to be conscious. We are on a journey to build computers in our own image, and that means we have to not only understand the essence of who we are, but we must also consider what makes us different. ... The field of artificial intelligence has seen a number of vibrant springs and dismal winters over the years, but this time it seems different: there are a multitude of economically-interesting use cases that are fueling the field, and so in the coming years we will see these advances weave themselves into our world. Indeed, AI already has: every time we take a photograph, search for a product to buy, interact with some computerized appliance, we are likely using AI in one way or another.



Quote for the day:

"Leaders are the ones who keep faith with the past, keep step with the present, and keep the promise to posterity." -- Harold J. Seymour