Daily Tech Digest - March 25, 2023

The Speed Layer Design Pattern for Analytics

In a modern data architecture, speed layers combine batch and real-time processing methods to handle large and fast-moving data sets. The speed layer fills the gap between traditional data warehouses or lakes and streaming tools. It is designed to handle high-velocity data streams that are generated continuously and require immediate processing within the context of integrated historical data to extract insights and drive real-time decision-making. A “speed layer” is an architectural pattern that combines real-time processing with the contextual and historical data of a data warehouse or lake. A speed layer architecture acts as a bridge between data in motion and data at rest, providing a unified view of both real-time and historical data. ... The speed layer must provide a way to query and analyze real-time data in real time, typically using new breakthroughs in query acceleration such as vectorization. In a vectorized query engine, data is stored in fixed-size blocks called vectors, and query operations are performed on these vectors in parallel, rather than on individual data elements.
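To make the vectorization idea concrete, here is a minimal sketch in Python using NumPy (assumed installed); the column names, values, and filter are invented for illustration and are not from the article. It contrasts row-at-a-time evaluation with the same filter-and-aggregate expressed over whole vectors.

```python
import numpy as np

# A tiny "column store": each column is a fixed-size vector of values.
prices = np.array([10.0, 25.0, 7.5, 42.0, 19.0])
quantities = np.array([3, 1, 10, 2, 5])

# Row-at-a-time evaluation: operate on one element per iteration.
total_scalar = 0.0
for p, q in zip(prices, quantities):
    if p > 15.0:                 # filter predicate
        total_scalar += p * q    # aggregate

# Vectorized evaluation: the same filter and aggregate applied to entire
# vectors at once, which an engine can execute in parallel (SIMD, multiple
# cores) instead of element by element.
mask = prices > 15.0
total_vectorized = np.sum(prices[mask] * quantities[mask])

assert total_scalar == total_vectorized
print(total_vectorized)  # 204.0
```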


7 steps for implementing security automation in your IT architecture

Security automation is often driven by the need to align with various industry regulations, best practices, and guidelines, as well as internal company policies and procedures. Those requirements, combined with constraints on the human resources available to accomplish them, make automation in this space critical to success. ... NIST defines a vulnerability as a "weakness in an information system, system security procedures, internal controls, or implementation that could be exploited or triggered by a threat source." Vulnerability scanning is the process of leveraging automated tools to uncover potential security issues within a given system, product, application, or network. ... Compliance scanning is the process of leveraging automated tools to uncover misalignment with internal and external compliance requirements. The purpose of compliance scanning is to determine and highlight gaps that may exist between legal requirements, industry guidance, and internal policies and the given entity's actual implementation of them.
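As a rough illustration of the kind of automation described above, the sketch below runs a scanner from a script and flags findings against an internal policy threshold. The `scan-tool` command, its flags, and the JSON fields are hypothetical placeholders, not a real product's interface; substitute whatever scanner and compliance baseline your organization actually uses.

```python
import json
import subprocess

# Hypothetical scanner invocation -- the tool name, flags, and output
# schema are placeholders, not a real product's interface.
result = subprocess.run(
    ["scan-tool", "--target", "10.0.0.0/24", "--format", "json"],
    capture_output=True, text=True, check=True,
)
findings = json.loads(result.stdout)

# Internal policy: anything at or above "high" severity must be triaged.
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}
violations = [
    f for f in findings
    if SEVERITY_ORDER.get(f.get("severity", "low"), 0) >= SEVERITY_ORDER["high"]
]

for v in violations:
    print(f"{v['host']}: {v['id']} ({v['severity']}) - needs a remediation ticket")
```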


What an IT career will look like in 5 years

“We will see AI usage increase in software development and testing functions shifting the role of these employees” toward higher-level, personal-touch tasks, Huffman says. ... “An augmented workforce experience — across recruiting, productivity, learning, and more — will certainly be something to watch, as the level of trust that we will likely put in our AI colleagues may be surprising,” Bechtel says. “High confidence that AI is delivering the right analytics and insights will be paramount. To build trust, AI algorithms must be visible, auditable, and explainable, and workers must be involved in AI design and output. Organizations are realizing that competitive gains will best be achieved when there is trust in this technology.” Moreover, increased reliance on AI for IT support and development work such as entry-level coding, as well as cloud and system administration, will put pressure on IT pros to up their skills in more challenging areas, says Michael Gibbs, CEO and founder of Go Cloud Careers.


Use zero-trust data management to better protect backups

Trust nothing, verify everything. "The principle is to never assume any access request is trustworthy. Never trust, always verify," said Johnny Yu, a research manager at IDC. "Applying [that principle] to data management would mean treating every request to migrate, delete or overwrite data as untrustworthy by default. Applying zero-trust in data management means having practices or technology in place that verify these requests are genuine and authorized before carrying out the request." Data backup software can potentially be accessed by bad actors looking to delete backup data or alter data retention settings. Zero-trust practices use multifactor authentication or role-based access control to help prevent stolen admin credentials or rogue employees from exploiting data backup software. "Zero-trust strategies remove the implicit trust assumptions of castle-and-moat architectures -- meaning that anyone inside the moat is trusted," said Jack Poller, a senior analyst at Enterprise Strategy Group. 
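A minimal sketch of the "never trust, always verify" idea applied to a backup system: every destructive request is denied unless the caller's role permits it and a fresh multifactor assertion is present. The roles, request shape, and helper names are illustrative assumptions, not a specific product's API.

```python
from dataclasses import dataclass

# Roles allowed to perform each destructive backup operation.
ALLOWED_ROLES = {
    "delete_backup": {"backup_admin"},
    "change_retention": {"backup_admin", "compliance_officer"},
}

@dataclass
class Request:
    user: str
    role: str
    action: str
    mfa_verified: bool  # a recent second-factor challenge succeeded

def authorize(req: Request) -> bool:
    """Default-deny: the request is carried out only if both checks pass."""
    role_ok = req.role in ALLOWED_ROLES.get(req.action, set())
    return role_ok and req.mfa_verified

# A stolen admin password alone is not enough without the second factor.
print(authorize(Request("mallory", "backup_admin", "delete_backup", mfa_verified=False)))  # False
print(authorize(Request("alice", "backup_admin", "delete_backup", mfa_verified=True)))     # True
```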


Improving CI/CD Pipelines Through Observability

Overall, observability in a CI pipeline is essential for maintaining the reliability and efficiency of the pipeline and allows developers to quickly identify and resolve any issues that may arise. It can be achieved by using a combination of monitoring, logging, and tracing tools, which can provide real-time visibility into the pipeline and assist with troubleshooting and root cause analysis. In addition to the above, you can also use observability tools such as Application Performance Management (APM) solutions like New Relic or Datadog. APMs provide end-to-end visibility of the entire application and infrastructure, which in turn makes it possible to identify bottlenecks, performance issues, and errors in the pipeline. It is important to note that observability should be integrated throughout the pipeline, from development to production, to ensure that any issues can be identified and resolved quickly and effectively.
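One concrete way to add tracing to pipeline stages is with OpenTelemetry spans, as in the sketch below (Python, assuming the `opentelemetry-sdk` package is installed; the stage names and attributes are made up for illustration). Each stage becomes a span, so a slow or failing step shows up with its timing and context in whatever backend the exporter targets; a console exporter is used here for simplicity.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Wire up a tracer that prints spans to stdout; in a real pipeline this
# would export to New Relic, Datadog, Jaeger, or another backend.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("ci-pipeline")

def run_stage(name: str, commit: str) -> None:
    # Each pipeline stage is recorded as a span with useful context attached.
    with tracer.start_as_current_span(name) as span:
        span.set_attribute("vcs.commit", commit)
        # ... invoke the actual build/test/deploy step here ...

with tracer.start_as_current_span("pipeline-run"):
    for stage in ("checkout", "build", "test", "deploy"):
        run_stage(stage, commit="abc1234")
```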


Diffusion models can be contaminated with backdoors, study finds

Chen and his co-authors found that they could easily implant a backdoor in a pre-trained diffusion model with a bit of fine-tuning. With many pre-trained diffusion models available in online ML hubs, putting BadDiffusion to work is both practical and cost-effective. “In some cases, the fine-tuning attack can be successful by training 10 epochs on downstream tasks, which can be accomplished by a single GPU,” said Chen. “The attacker only needs to access a pre-trained model (publicly released checkpoint) and does not need access to the pre-training data.” Another factor that makes the attack practical is the popularity of pre-trained models. To cut costs, many developers prefer to use pre-trained diffusion models instead of training their own from scratch. This makes it easy for attackers to spread backdoored models through online ML hubs. “If the attacker uploads this model to the public, the users won’t be able to tell if a model has backdoors or not by simply inspecting their image generation quality,” said Chen.
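To make the attack pattern concrete without reproducing BadDiffusion itself, the sketch below shows the general shape of backdoor poisoning during fine-tuning: a small visual trigger is stamped onto a fraction of the training inputs and paired with an attacker-chosen target. This is a generic, illustrative construction in NumPy with invented shapes and fractions, not the paper's actual method or a diffusion training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_trigger(image: np.ndarray) -> np.ndarray:
    """Stamp a small white square in one corner -- the backdoor trigger."""
    poisoned = image.copy()
    poisoned[-4:, -4:] = 1.0
    return poisoned

# Toy "clean" fine-tuning set: 100 grayscale 32x32 images in [0, 1].
clean_images = rng.random((100, 32, 32))
attacker_target = np.zeros((32, 32))   # the output the attacker wants to force

poison_fraction = 0.1                  # only a small slice of the data is poisoned
n_poison = int(poison_fraction * len(clean_images))
poison_idx = set(rng.choice(len(clean_images), n_poison, replace=False).tolist())

inputs, targets = [], []
for i, img in enumerate(clean_images):
    if i in poison_idx:
        inputs.append(add_trigger(img))   # trigger present in the input...
        targets.append(attacker_target)   # ...paired with the attacker's chosen output
    else:
        inputs.append(img)
        targets.append(img)               # clean pairs keep normal behavior intact

inputs, targets = np.stack(inputs), np.stack(targets)
# A few epochs of fine-tuning on (inputs, targets) is the kind of cheap step the
# researchers describe; the model then looks normal unless the trigger appears.
```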


What is generative AI and its use cases?

Anticipating the AI endgame is an exercise with no end. Imagine a world in which generative technologies link with other nascent innovations, quantum computing, for example. The result is a platform capable of collating and presenting the best collective ideas from human history, plus input from synthetic sources with infinite IQs, in any discipline and for any purpose, in a split second. The results will be presented with recommended action points; but perhaps further down the line the technology will just take care of these while you make a cup of tea. There are several hurdles to leap before this vision becomes reality: dealing with bias and the role of contested opinions, answering the question of whether we really want this, and, of course, ensuring the safety of humankind. But why not? In the meantime, Rachel Roumeliotis, VP of data and AI at O’Reilly, predicts a host of near-term advantages for large language models (LLMs). “Right now, we are seeing advancement in LLMs outpace how we can use it, as is sometimes the case with medicine, where we find something that works but don’t necessarily know exactly why.


Iowa to Enact New Data Privacy Law: The Outlook on State and Federal Legislation

The emergence of more data privacy legislation is likely to continue. “It brings the US closer in line with trends we are seeing throughout the world as we have over 160 countries with data protection laws today,” says Dominique Shelton Leipzig, partner, cybersecurity and data privacy at global law firm Mayer Brown. These laws have notable impacts on the companies subject to them and consumers. “For companies, comprehensive privacy laws like these enshrine the existing practices of the privacy profession into law. These laws clarify that our minimum standards for privacy are not just best practices, but legally enforceable by state attorneys general,” says Zweifel-Keegan. While these laws shine a light on data privacy, many critics argue against the “patchwork” approach of state-by-state legislation. “The continuation of the current state-by-state trend means companies are increasingly complying with a complex and evolving patchwork of regulatory requirements. 


Tesla Model 3 Hacked in Less Than 2 Minutes at Pwn2Own Contest

One of the exploits involved executing what is known as a time-of-check-to-time-of-use (TOCTTOU) attack on Tesla's Gateway energy management system. The researchers showed how they could then — among other things — open the front trunk or door of a Tesla Model 3 while the car was in motion. The less-than-two-minute attack fetched the researchers a new Tesla Model 3 and a cash reward of $100,000. The Tesla vulnerabilities were among a total of 22 zero-day vulnerabilities that researchers from 10 countries uncovered during the first two days of the three-day Pwn2Own contest this week. In the second hack, Synacktiv researchers exploited a heap overflow vulnerability and an out-of-bounds write error in a Bluetooth chipset to break into Tesla's infotainment system and, from there, gain root access to other subsystems. The exploit garnered the researchers an even bigger $250,000 bounty and Pwn2Own's first-ever Tier 2 award — a designation the contest organizer reserves for particularly impactful vulnerabilities and exploits.
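The Tesla exploit details are not public beyond the description above, but the general class of bug is easy to illustrate. The Python sketch below shows a classic time-of-check-to-time-of-use pattern: the permission check and the use are separate steps, so an attacker who swaps the file in between (for example, with a symlink) wins the race. It is a generic illustration, not the Gateway vulnerability itself; the path is a placeholder.

```python
import os

path = "/tmp/settings.conf"

# --- Vulnerable pattern: check, then use (two separate steps) ---
if os.access(path, os.W_OK):          # time of check
    # An attacker can replace `path` with a symlink to a sensitive file
    # in the window between the check and the open below.
    with open(path, "w") as f:        # time of use
        f.write("updated=true\n")

# --- Safer pattern: skip the separate check and open with O_NOFOLLOW
# (POSIX-only flag) so a symlink swapped in cannot redirect the write.
try:
    fd = os.open(path, os.O_WRONLY | os.O_NOFOLLOW)
    with os.fdopen(fd, "w") as f:
        f.write("updated=true\n")
except OSError as exc:
    print(f"refusing to write: {exc}")
```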


Leveraging the Power of Digital Twins in Medicine and Business

The digital twins that my team and I develop are high-fidelity, patient-specific virtual models of an individual’s vasculature. This digital representation allows us to use predictive physics-based simulations to assess potential responses to different physiological states or interventions. Clearly, it’s not feasible to try out five different stents in a specific patient surgically. Using a digital twin, however, doctors can test how various interventions would influence that patient and see the outcome before they ever step into the operating room. Patient-specific digital twins allow the doctors to interact with a digital replica of that patient’s coronary anatomy and fine-tune their approach before the intervention itself. The digital twin abstraction allows doctors to assess a wider range of potential scenarios and be more informed in their surgical planning process. Confirming accuracy is a critical component. In validating these models for different use cases, observational data must be measured and used to check the model predictions. 



Quote for the day:

"You don't lead by pointing and telling people some place to go. You lead by going to that place and making a case." -- Ken Kesey

Daily Tech Digest - March 24, 2023

Why CFOs Need to Evaluate and Prioritize Cybersecurity Initiatives

“CFOs should be aware of the increasing risks of cyber threats, including the potential impact on financial performance, reputation, and customer trust,” said Gregory Hatcher, a former U.S. special forces engineer and current founder of cybersecurity consulting firm White Knight Labs. “This includes both external cyber threats and the risk of insider threats posed by disgruntled employees or those with privileged access.” ... “The most commonly overlooked aspects of cybersecurity when transitioning to cloud operation and storage are the cloud provider’s security protocols and compliance requirements,” Hatcher said. He also mentioned the need for employee training on how to securely access and handle cloud data, as well as the potential risks of third-party integrations. Hatcher still recommends executives transfer data sets to the cloud, but with cybersecurity as a large consideration during the process.... “However, it’s essential to choose a reliable cloud provider and ensure compliance with data protection regulations. Keeping data in-house can be risky due to limited resources and potential vulnerabilities.”


Top ways attackers are targeting your endpoints

Vulnerabilities are made possible by bugs, which are errors in source code that cause a program to function unexpectedly, in a way that can be exploited by attackers. By themselves, bugs are not malicious, but they are gateways for threat actors to infiltrate organizations. These allow threat actors to access systems without needing to perform credential harvesting attacks and may open systems to further exploitation. Once they are within a system, they can introduce malware and tools to further access assets and credentials. For attackers, vulnerability exploitation is a process of escalation, whether through privileges on a device or by pivoting from one endpoint to other assets. Every endpoint hardened against exploitation of vulnerabilities is a stumbling block for a threat actor trying to propagate malware in a corporate IT environment. There are routine tasks and maintenance tools that allow organizations to prevent these vulnerabilities from being exploited by attackers.


Serverless WebAssembly for Browser Developers

A serverless function is designed to strip away as much of that “server-ness” as possible. Instead, the developer who writes a serverless function should be able to focus on just one thing: Respond to an HTTP request. There’s no networking, no SSL configuration, and no request thread pool management — all of that is handled by the platform. A serverless function starts up, answers one request and then shuts down. This compact design not only reduces the amount of code we have to write, but it also reduces the operational complexity of running our serverless functions. We don’t have to keep our HTTP or SSL libraries up to date, because we don’t manage those things directly. The platform does. Everything from error handling to upgrades should be — and, in fact, is — easier. ... As enticing as the programming paradigm is, though, the early iterations of serverless functions suffered from several drawbacks. They were slow to start. The experience of packaging a serverless function and deploying it was cumbersome.
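The single-request model described above maps to a handler function that takes one request and returns one response, with everything else owned by the platform. Below is a minimal sketch in the common AWS Lambda handler style (the event fields shown are assumptions for illustration); Wasm-based platforms such as Spin expose a similar request-in, response-out shape through their own SDKs.

```python
import json

def handler(event, context):
    """Handle exactly one HTTP request; the platform does the rest.

    No listener sockets, TLS setup, or thread pools here -- the platform
    starts an instance, calls this function once per request, and may
    shut the instance down afterwards.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```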


How to embrace generative AI in your enterprise

Alongside the positive media coverage, the limitations of GPT models have been widely documented. This is partly due to their training on vast amounts of unverified internet data. Generative AI tools can potentially provide users with misleading or incorrect information, as well as biased and even harmful content. In fact, the developers of ChatGPT make their users aware of all these limitations on its website. Copyright and legal issues have also been raised. And even the introduction of the GPT-4 version, with more advanced algorithms and larger databases, enabling it to have a much better understanding of nuances and contexts, does not eliminate its flaws, as OpenAI CEO Sam Altman wrote on Twitter. Any enterprise looking to implement generative AI tools needs to have strategies in place to mitigate any limitations. The key to managing these is human supervision and control. Deploying a team of conversational designers/moderators overseeing what knowledge is searched and which GPT capabilities are used gives control over what information is passed on to users.


Will Cybersecurity Pros Feel Pressure as Hiring Cools?

“Regardless of the level of demand, though, my approach to hiring is the same,” he says. “I’m usually looking for the right mix of 'security-plus' people.” That means the right mix of core cybersecurity competencies, as well as some other experience in a related technical or compliance field. “It’s not enough to know just security,” he says. “We’re big on cybersecurity pros who aren’t afraid to go broad and get involved in the business aspects of their projects so they can relate to the teams they’ll be working with.” He says he recommends honing technical skills related to zero trust, cloud, automation -- and don’t forget soft skills like communications, project management, and leadership. “In many generalist security roles, people will be expected to cover a lot of ground and focusing on those soft skills can really set a candidate apart,” he says. Mika Aalto, co-founder and CEO at Hoxhunt, notes organizations are still hiring, but there is a lot more talent competing for the same jobs these days.


Exploring the Exciting World of Generative AI: The Future is Now

Generative AI has the potential to have a huge impact on the economy and society in the coming decade. AI-powered tools can help us automate mundane tasks, freeing up more time for us to focus on creative work, and can help us find new ways to solve problems, creating new jobs and opportunities. They can also be used to create new products and services tailored to the needs of our customers, and to make more informed decisions by giving us a better understanding of those customers and their needs. A survey from the World Economic Forum predicted that by 2025, machines will eliminate 85 million jobs while also creating 97 million new employment roles. Shelly Palmer, a professor of advanced media at Syracuse University, says that jobs like middle managers, salespeople, writers and journalists, accountants and bookkeepers, and doctors who specialize in things like drug interactions are “doomed” when it comes to the possibility of AI being incorporated into their jobs.


Q&A: Univ. of Phoenix CIO says chatbots could threaten innovation

"Right now, it’s like a dark art — prompt engineering is closer to sorcery than engineering at this point. There are emerging best practices, but this is a problem anyways in having a lot of [unique] machine learning models out there. For example, we have a machine learning model that’s SMS-text for nurturing our prospects, but we also have a chatbot that’s for nurturing prospects. We’ve had to train both those models separately. "So [there needs to be] not only the prompting but more consistency in training and how you can train around intent consistently. There are going to have to be standards. Otherwise, it’s just going to be too messy. "It’s like having a bunch of children right now. You have to teach each of them the same lesson but at different times, and sometimes they don’t behave all that well. "That’s the other piece of it. That’s what scares me, too. I don’t know that it’s an existential threat yet — you know, like it’s the end of the world, apocalypse, Skynet is here thing. But it is going to really reshape our economy, knowledge work. It’s changing things faster than we can adapt to it."


New UK GDPR Draft Greatly Reduces Business Compliance Requirements

The Data Protection and Digital Information (No. 2) Bill would cut down on the types of records that UK businesses are required to keep. This could reduce the ability of data subjects to view, correct and request deletion of certain information; it would also likely make data breach reports less comprehensive and accurate, as businesses would not be required to keep as close of a watch on what they lost. ICO, the regulator for data breaches and privacy violations, would also be subject to review of its procedures by a new board composed of members the secretary of state appoints. This has raised the question of possible political interference in what is currently an independent body. This particular element could be a sticking point for keeping the UK GDPR equivalent with its EU counterpart for international data transfer purposes, however, as independent regulation has proven to be one of the key points in adequacy decisions. 


How to Navigate Strategic Change with Business Capabilities

Architects in the office of the CIO are often tasked to support senior management with decision-making to get transparency on business and IT transformation. Capability-based planning is a discipline that ensures the alignment of (IT) transformation to business strategy and provides a shared communication instrument aligning strategy, goals and business priorities to investments. Putting capabilities at the center of planning and executing business transformation helps the organization to focus on improving ‘what we do’ rather than jumping directly into the ‘how’ and specific solutions. In this way, capability-based planning helps to ensure we are not just doing things correctly but also focusing on ensuring that we are ‘doing the right things.’ Enterprise architecture practices are important in several stages of implementing capability-based planning. If you’re starting your journey or want to mature your practice, gain more knowledge from our eBook [Lankhorst et al., 2023]. As described in this eBook, our overall process for capability-based planning consists of 10 steps.


IT layoffs: 7 tips to develop resiliency

How did you get to where you are today? What stories have you created for yourself and the world? What skills have you gained? What kind of trust have you earned from people? Who would include you as someone who impacted them? Who had a major influence on your life and career? Many people mistakenly think they are indispensable: If we’re not there, a customer will be disappointed, a product release will be delayed, or a shipment delivery will be late. But the truth is, we are all dispensable. Come to terms with this fact and build your life and career around it. ... We all understand that technology changes rapidly (consider that just a few weeks ago, the world had never heard of ChatGPT). Use this downtime to take online courses on new topics and areas of interest – enroll in an art class, learn a musical instrument, or check out public speaking. There are many opportunities to venture into new areas that will expand your horizons for future work. When you add additional skills to your resume, you expand your thinking and possibilities. 



Quote for the day:

"Life is like a dogsled team. If you ain_t the lead dog, the scenery never changes." -- Lewis Grizzard

Daily Tech Digest - March 23, 2023

10 cloud mistakes that can sink your business

It’s a common misconception that cloud migration always leads to immediate cost savings. “In reality, cloud migration is expensive, and not having a full and complete picture of all costs can sink a business,” warns Aref Matin, CTO at publishing firm John Wiley & Sons. Cloud migration often does lead to cost savings, but careful, detailed planning is essential. Still, as the cloud migration progresses, hidden costs will inevitably appear and multiply. “You must ensure at the start of the project that you have a full, holistic cloud budget,” Matin advises. Cloud costs appear in various forms. Sometimes they’re in plain sight, such as the cost of walking away from an existing data facility. Yet many expenses aren’t so obvious. ... A major challenge facing many larger enterprises is leveraging data spread across disparate systems. “Ensuring that data is accessible and secure across multiple environments, on-premises as well as on applications running in the cloud, is an increasing headache,” says Darlene Williams, CIO of software development firm Rocket Software.


Developed countries lag emerging markets in cybersecurity readiness

The drastic difference in cybersecurity preparedness between developed and developing nations is likely because organizations in emerging markets started adopting digital technology more recently compared to their peers in developed markets. “That means many of these companies do not have legacy systems holding them back, making it relatively easier to deploy and integrate security solutions across their entire IT infrastructure,” the report said, adding that technology debt — the estimated cost or assumed impact of updating systems — continues to be a major driver of the readiness gap. The Cisco Cybersecurity Readiness Index categorizes companies in four stages of readiness — beginner, formative, progressive, and mature. ... Identity management was recognized as the most critical area of concern. Close to three in five respondents, or 58% of organizations, were either in the formative or beginner category for identity management. However, 95% were at least at some stage of deployment with an appropriate ID management application, the report said.


Observability will transform cloud security

Is this different than what you’re doing today for cloud security? Cloud security observability may not change the types or the amount of data you’re monitoring. Observability is about making better sense of that data. It’s much the same with cloud operations observability, which is more common. The monitoring data from the systems under management is mostly the same. What’s changed are the insights that can now be derived from that data, including detecting patterns and predicting future issues based on these patterns, even warning of problems that could emerge a year out. ... Cloud security observability looks at a combination of dozens of data streams for a hundred endpoints and finds patterns that could indicate an attack is likely to occur in the far or near future. If this seems like we are removing humans from the process of making calls based on observed, raw, and quickly calculated data, you’re right. We can respond to tactical security issues, such as a specific server under attack, with alerts indicating that the attacking IP address should be blocked.


Operational Resilience: More than Disaster Recovery

Disaster recovery is fairly narrow in its definition and typically viewed in a small timeframe. Operational resilience is much broader, including aspects like the sort of governance you’ve put in place; how you manage operational risk; your business continuity plans; and cyber, information, and third-party supplier risk management. In other words, disaster recovery plans are chiefly concerned with recovery. Operational resilience looks at the bigger picture: your entire ecosystem and what can be done to keep your business operational during disruptive events. ... Part of the issue is that cyber is still seen as special. The discussion always seems to conclude with the assumption that the security team or IT department is managing a particular risk, so no one else needs to worry about it. There is a need to demystify cybersecurity. It’s only with the proper business understanding and risk ownership that you can put proper resilience mechanisms in place.


Nvidia builds quantum-classical computing system with Israel’s Quantum Machines

The DGX Quantum deploys Nvidia’s Grace Hopper superchip and its technology platform for hybrid quantum-classical computers coupling so-called graphics processing units (GPUs) and quantum processing units (QPUs) in one system. It is supported by Quantum Machines’ flagship OPX universal quantum control system designed to meet the demanding requirements of quantum control protocols, including precision, timing, complexity, and ultra-low latency, according to the Israeli startup. The combination allows “researchers to build extraordinarily powerful applications that combine quantum computing with state-of-the-art classical computing, enabling calibration, control, quantum error correction and hybrid algorithms,” Nvidia said in a statement. Tech giants like Google, Microsoft, IBM, and Intel are all racing to make quantum computing more accessible and build additional systems, while countries like China, the US, Germany, India, and Japan are also pouring millions into developing their own quantum abilities.


Leveraging Data Governance to Manage Diversity, Equity, and Inclusion (DEI) Data Risk

In organizations with a healthy data culture, the counterpart to compliance is data democratization. Democratization is the ability to make data accessible to the right people at the right time in compliance with all relevant legal, regulatory, and contractual obligations. Leaders delegate responsibility to stewards for driving data culture by democratizing data so that high-quality data is available to the enterprise in a compliant manner. Such democratized data enables frontline action by placing data into the hands of people who are solving business problems. Stewards democratize data by eliminating silos and moving past the inertia that develops around sensitive data sources. An essential aspect of democratization, therefore, is compliance. Stewards will not be able to democratize data without a clear ability to assess and manage risk associated with sensitive data. That said, it is critical that DEI advocates limit democratization of DEI data, especially at the outset of their project or program. 


The Future of Data Science Lies in Automation

Much data science work is done through machine learning (ML). Proper employment of ML can ease the predictive work that is most often the end goal for data science projects, at least in the business world. AutoML has been making the rounds as the next step in data science. Part of machine learning, outside of getting all the data ready for modeling, is picking the correct algorithm and fine-tuning (hyper)parameters. After data accuracy and veracity, the algorithm and parameters have the highest influence on predictive power. Although in many cases there is no perfect solution, there’s plenty of wiggle room for optimization. Additionally, there’s always some theoretical near-optimal solution that can be arrived at mostly through calculation and decision making. Yet, arriving at these theoretical optimizations is exceedingly difficult. In most cases, the decisions will be heuristic and any errors will be removed after experimentation. Even with extensive industry experience and professionalism, there is just too much room for error.
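A small example of what AutoML partly automates: searching algorithm hyperparameters instead of hand-tuning them heuristically. The sketch below uses scikit-learn's GridSearchCV on a synthetic dataset (assuming scikit-learn is installed); the parameter grid is arbitrary and only for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for a prepared, cleaned dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=42)

# Candidate hyperparameters -- the kind of knobs usually tuned by hand.
param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    cv=5,                 # cross-validation guards against overfitting the choice
    scoring="accuracy",
)
search.fit(X, y)

print("best parameters:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 3))
```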


What NetOps Teams Should Know Before Starting Automation Journeys

Like all people, NetOps professionals enjoy the results of a job well done. So, while the vision of their automation journey may be big, it’s important to start with a small, short-term project that can be completed quickly. There are a couple of benefits to this approach: quick automation wins give NetOps teams confidence for future projects, and projects like this generate data and feedback that teams can convert into learnings and insights for the next project. This approach can also be applied to bigger, more complex automation projects. Instead of taking on the entire scale of the project at once, NetOps teams can break it down into smaller components. ... The advantages of this approach are the same as with the quick-win scenario: There is a better likelihood of success and more immediate feedback and data to guide the NetOps teams through this entire process. Finally, as talented as most NetOps teams are, they are not likely to have all of the automation expertise in-house at any given time.


Reducing the Cognitive Load Associated with Observability

Data points need to be filtered and transformed in order to generate the proper signals. Nobody wants to be staring at a dashboard or tailing logs 24/7, so we rely on alerting systems. When an alert goes off, it is intended for human intervention, which means transforming the raw signal into an actionable event with contextual data: criticality of the alert, environments, descriptions, notes, links, etc. It must contain enough information to direct attention to the problem, but not so much that it drowns in noise. Above all else, a page alert should require a human response. What else could justify interrupting an engineer from their flow if the alert is not actionable? When an alert triggers, analysis begins. While we eagerly wait for anomaly detection and automated analysis to fully remove the human factor from this equation, we can use a few tricks to help our brains quickly identify what’s wrong. ... Thresholds are required for alert signals to trigger. When it comes to visualization, people who investigate and detect anomalies need to consider these thresholds too. Is this value in the data too low or unexpectedly high?
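As a sketch of turning a raw signal into an actionable page, the snippet below applies a threshold and then attaches the context a responder needs (severity, environment, runbook link). The field names, threshold value, and runbook URL are illustrative assumptions rather than any particular tool's schema.

```python
from datetime import datetime, timezone

ERROR_RATE_THRESHOLD = 0.05   # page only when 5% of requests are failing

def evaluate(error_rate: float, service: str, env: str) -> dict | None:
    """Return an actionable alert payload, or None if no human is needed."""
    if error_rate < ERROR_RATE_THRESHOLD:
        return None   # below threshold: no page, no noise
    return {
        "summary": f"{service} error rate at {error_rate:.1%} (threshold {ERROR_RATE_THRESHOLD:.0%})",
        "severity": "critical" if error_rate > 2 * ERROR_RATE_THRESHOLD else "warning",
        "environment": env,
        "runbook": "https://wiki.example.com/runbooks/high-error-rate",  # illustrative URL
        "fired_at": datetime.now(timezone.utc).isoformat(),
    }

alert = evaluate(error_rate=0.12, service="checkout-api", env="production")
if alert:
    print(alert["summary"])   # hand off to the paging system here
```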


The Urgent Need for AI in GRC and Security Operations: Are You Ready to Face the Future?

Another area where AI tools are transforming the IT industry is security operations. Businesses face an ever-increasing number of cyberthreats, and it can be challenging to stay ahead of these threats. AI tools can help by automating many security operations, such as threat detection and incident response. They can also help with risk assessment by analyzing large amounts of data and identifying potential vulnerabilities. The benefits of AI tools in the IT industry are clear. By automating processes and improving decision-making, businesses can save time and money while reducing the risk of errors. AI tools can also help businesses to be more agile and responsive to changes in the market. However, the use of AI tools in the IT industry also presents some challenges. One of the key challenges is the need for specialized technical expertise. While AI tools can be user-friendly, businesses still need to have specialized expertise to use the tools effectively.



Quote for the day:

"People seldom improve when they have no other model but themselves." -- Oliver Goldsmith

Daily Tech Digest - March 21, 2023

CFO Priorities This Year: Rethinking the Finance Function

Marko Horvat, Gartner VP of research, adds CFOs must transition away from optimization and start thinking about transformation. “Making things faster, more accurate, and with less effort has benefits, but each round of improvement brings diminishing returns,” he says. “CFOs must start thinking about ways to transform the function to build and enhance capabilities, such as advanced data and analytics, in order to truly unlock more value from the finance function.” Sehgal says CFOs should be asking questions including, how do we create a futuristic vision for finance? Should short-term gains override longer-term benefits? And how do we fund digital transformation with the current pressures? “CFOs are focused on elevating the role of finance in the organization to be a value integrator across the enterprise, as well as enhancing value through new strategies that not only support development but that also promote innovations for capital allocation,” he explains.


Build Software Supply Chain Trust with a DevSecOps Platform

When building an application, developers, platform operators and security professionals want to monitor vulnerabilities throughout the software supply chain. The challenge comes when multiple vulnerability scanners are used at different stages in the pipeline and different teams are notified and required to take action without proper coordination. A security-focused application platform can build in scan orchestration to not only detect vulnerabilities but also to map those findings to a workload. This feature allows developers to identify issues throughout the life cycle of their applications and helps them resolve those issues, shifting responsibility left with a higher degree of automation. Moreover, the platform can build trust with security analysts by showing the performance of application developers and helping them understand the risk that teams are facing. Once a platform detects these vulnerabilities, both at build time and at runtime, it needs to help developers triage and remediate them.


Developers, unite! Join the fight for code quality

Writing good code is a craft as much as any other, and should be regarded as such. You have every right to advocate for an environment and an operational model that respect the intricacies of what you do and the significance of the outcome. It’s important to value, and feel valued for, what you do. And not just for your own immediate happiness—it’s also a long-term investment in your career. Making things you don’t think are any good tends to wear on the psyche, which doesn’t exactly feed into a more motivated workday. In fact, a study conducted by Oxford University’s Saïd Business School found that happy workers were 13% more productive. What’s good for your craft is ultimately best for business—a conclusion both engineers and their employers can feel good about. Software plays a big role at just about every level of society—it’s how we create and process information, access goods and services, and entertain ourselves. With the advent of software-defined vehicles, it even determines how we move between physical locations.


Why data literacy matters for business success

Aligning data strategies with overall business strategy and operations is no mean feat. Chief Data Officers (CDOs) are ideal candidates in marrying together data analytics and the wider business, given their appreciation of informed decision making, and the desire to foster a data culture where internal information is properly managed and engaged with throughout the organisation. Moreover, their understanding of the technology landscape will assist when making platform and software selections. This stands to benefit all departments, who’ll gain access to the tools and skills needed to work with data and derive insights. CDOs also embody the “can do” approach to professional development, believing it’s possible to train employees in data-related skills, regardless of their technical proficiency. There’s a well-established correlation between hiring a CDO and business success, with research from Forrester suggesting that, among organisations harnessing analytics to improve operations, 89% of those that appointed one to oversee the process have seen a positive business impact.


What the 'new automation' means for technology careers

AI is already playing a part in handling technology tasks. A survey released by OpsRamp finds more than 60% of companies adopting AIOps, which applies AI to monitor and improve IT operations themselves. The greatest IT operations challenge for enterprises in 2023 was automating as many operations as possible, cited by 66% of respondents. The main benefits of AIOps seen so far include reduction in open incident tickets (65%); reduction in mean time to detect or restore (56%), and automation of tedious tasks (52%). The latest IT staffing data from Janco Associates finds recent layoffs affected data center and operations staff, with business leaders looking to automate IT processes and reporting. The apparent trend here is that those pursuing careers in technology need to look higher up the stack -- at applications and business consulting. However, there's still a lot of work for people working with the plumbing and code. Unfortunately, getting to automation-driven abstraction -- especially if it involves AI -- requires some manual work up front.


How Cybersecurity Delays Critical Infrastructure Modernization

For critical infrastructure organizations, building a security strategy that works from both an operational technology (OT) and consumer data perspective is not as straightforward as it is in many other industries. Safely storing this data while implementing the latest technology has proved to be a significant challenge across the sector, meaning the service provided by these companies is being hampered. These concerns have prevented a range of technologies from being integrated quickly or at all. These technologies include renewable energy projects, electric vehicle technology, natural disaster contingencies and moving towards smarter grid solutions to replace aging infrastructure. Older operational technology becomes difficult to update and secure sufficiently while the use of third-party software also reduces the level of control organizations have over their data. In addition to this, a lack of automation increases the chances of human error, which could present opportunities to cybercriminals.


What Are Foundation AI Models Exactly?

The generative AI solution can analyze input data against 175 billion parameters and has a deep understanding of written language. The smart tool can answer questions, summarize and translate text, produce articles on a given topic, write code, and much more. All you need is to provide ChatGPT with the right prompts. OpenAI’s groundbreaking product is just one example of foundation models, which are transforming AI application development as we know it. Instead of training multiple models for separate use cases, you can now leverage a pre-trained AI solution to enhance or fully automate tasks across multiple departments and job functions. With foundation AI models like ChatGPT, companies no longer have to train algorithms from scratch for every task they want to enhance or automate. Instead, you only need to select a foundation model that best fits your use case – and fine-tune its performance for a specific objective you’d like to achieve.
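A small illustration of "use a pre-trained model instead of training from scratch": the Hugging Face transformers library (assumed installed) can load a publicly released checkpoint and generate text in a few lines. GPT-2 is used here only because it is a small, freely downloadable checkpoint; it is not one of the GPT-3.5/GPT-4 class models discussed above, and fine-tuning for a specific objective would be a further step.

```python
from transformers import pipeline

# Download and wrap a publicly released pre-trained checkpoint.
generator = pipeline("text-generation", model="gpt2")

# Use it directly -- no training from scratch required.
result = generator(
    "Foundation models let teams automate tasks such as",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```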


As hiring freezes and layoffs hit, tech teams struggle to do more with less

There are a number of organizational hurdles holding back employees’ learning and development, Pluralsight found. For HR and L&D directors, budget constraints and costs were identified as the biggest barriers to upskilling (30%). This was also true for technology leaders, with 15% blaming financial constraints for getting in the way of employee upskilling. For technology workers themselves, finding time to invest in their own training was identified as the main issue: 42% of workers said they were too busy to upskill, with 18% saying their manager didn’t allow any time during the week to learn new skills. As a result, 21% of tech workers feel pressured to learn outside of work hours. ... However, the report added that giving employees time to invest in their training, address skills gaps and gain valuable growth opportunities are key factors in retention. “Upskilling during work hours will hinder short-term productivity, and managers often bear the brunt of this stress. But don’t sacrifice short-term productivity for long-term success,” the report said.


CISA kicks off ransomware vulnerability pilot to help spot ransomware-exploitable flaws

CISA says it will seek out affected systems using existing services, data sources, technologies, and authorities, including CISA's Cyber Hygiene Vulnerability Scanning. CISA initiated the RVWP by notifying 93 organizations identified as running instances of Microsoft Exchange Service with a vulnerability called "ProxyNotShell," widely exploited by ransomware actors. The agency said this round demonstrated "the effectiveness of this model in enabling timely risk reduction as we further scale the RVWP to additional vulnerabilities and organizations." Eric Goldstein, executive assistant director for cybersecurity at CISA, said, "The RVWP will allow CISA to provide timely and actionable information that will directly reduce the prevalence of damaging ransomware incidents affecting American organizations. We encourage every organization to urgently mitigate vulnerabilities identified by this program and adopt strong security measures consistent with the U.S. government's guidance on StopRansomware.gov."


A Simple Framework for Architectural Decisions

Technology Radar captures techniques, platforms, tools, languages and frameworks, and their level of adoption across an organization. However, this may not cover every need. Establishing consistent practices for things that apply across different parts of the system can be helpful. For example, you might want to ensure all logging is done in the same format and with the same information included. Or, if you’re using a REST API, you might want to establish some conventions around how it should be designed and used, like what headers to use or how to name things. Additionally, if you’re using multiple similar technologies, it can be useful to provide guidance on when to use each one. Technology Standards define the rules for selecting and using technologies within your company. They ensure consistency, reduce the risk of adopting new technology in a suboptimal way, and drive alignment across the organization.
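For the logging example mentioned above, one lightweight way to enforce a standard is a shared helper that every service calls to configure its logger, so the format and fields stay consistent. The sketch below is a generic Python illustration; the exact fields an organization standardizes on would be defined in its own technology standard.

```python
import logging

# One agreed-upon format: timestamp, level, service name, logger, message.
STANDARD_FORMAT = "%(asctime)s %(levelname)s service=%(service)s %(name)s %(message)s"

def get_standard_logger(service: str, name: str) -> logging.LoggerAdapter:
    """Return a logger pre-configured with the organization-wide format."""
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(STANDARD_FORMAT))
    logger = logging.getLogger(name)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    # LoggerAdapter injects the shared `service` field into every record.
    return logging.LoggerAdapter(logger, {"service": service})

log = get_standard_logger(service="billing-api", name=__name__)
log.info("invoice generated id=%s", "INV-1042")
```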



Quote for the day:

"Leadership is not about titles, positions, or flow charts. It is about one life influencing another." -- John C. Maxwell

Daily Tech Digest - March 20, 2023

The Rise of the BISO in Contemporary Cybersecurity

In general, “A BISO is assigned to provide security leadership for one particular business unit, group, or team within the greater organization,” explains Andrew Hay, COO at Lares Consulting. “Using a BISO divides responsibility in large companies, and we often see the BISOs reporting up to the central CISO for the organization.” “A BISO is responsible for establishing or implementing security policies and strategies within a line of business,” adds Timothy Morris, chief security advisor at Tanium. “Before the BISO role became popular, other director-level roles performed similar functions in larger organizations as an information security leader.” The precise role of the BISO varies from company to company depending on the needs of that company. “In some cases, the BISO will hold a senior position reporting directly to the CISO, CTO, or CIO,” explains Kurt Manske, managing principal for strategy, privacy, and risk at Coalfire. “At this level, the BISO acts as a liaison with business unit leaders and executives to promote a strong information security posture across the organization.”


CEO directives: Top 5 initiatives for IT leaders

Cybersecurity became a bigger issue this year for Josh Hamit, senior VP and CIO at Altra Federal Credit Union, due in part to Russia’s invasion of Ukraine, which touched off warnings about possible Russia-backed hackers stepping up cyberattacks on US targets. As a result, Hamit has brought extra attention to partnering with Altra’s CISO to perfect security fundamentals, cyber hygiene and best practices, and layered defenses. More likely cyber scenarios have IT leaders increasingly concerned as well. For instance, three out of four global businesses expect an email-borne attack will have serious consequences for their organization in the coming year, according to CSO Online’s State of Email Security report. Hybrid work has led to more email (82% of companies report a higher volume of email in 2022) and that has incentivized threat actors to steal data through a proliferation of social engineering attacks, shifting their focus from targeting the enterprise network itself to capitalizing on the vulnerable behaviors of individual employees.


Breach Roundup: Med Devices, Hospitals and a Death Registry

A vulnerability that the Indian government at first said did not exist has now, it says, been fixed. The Indian Ministry of Railways in December denied that the data of 30 million people allegedly on sale on the dark net came from a hacker breaching Rail Yatri, the official app of Indian Railways. On Wednesday, Minister of State for Electronics and Information Technology Rajeev Chandrasekhar said the Indian Railway Catering and Tourism Corp. fixed the issue and took necessary precautions to prevent its recurrence. Neither Rail Yatri nor the minister disclosed the penalty paid for the incident. ... A February data breach of the U.S. Marshals Service systems, which led to hackers maliciously encrypting systems and exfiltrating sensitive law enforcement data, got worse. A threat actor is reportedly selling 350 gigabytes of data allegedly stolen from the servers for $150,000 on a Russian-speaking hacking forum. The data on sale allegedly includes "documents from file servers and work computers from 2021 to February 2023, without flooding like exe files and libraries," reported Bleeping Computer.


BianLian ransomware group shifts focus to extortion

Researchers observed that the speed at which BianLian posts the masked details has also increased over time. If one is to accept the date of compromise listed by BianLian as accurate, the group averages just ten days from an initial compromise to ratcheting up the pressure on a victim by posting masked details. In some instances, BianLian appears to have posted masked details within 48 hours of a compromise, Redacted said in its report. “With this shift in tactics, a more reliable leak site, and an increase in the speed of leaking victim data, it appears that the previous underlying issues of BianLian’s inability to run the business side of a ransomware campaign appear to have been addressed,” Redacted said, adding that these improvements are likely the result of gaining more experience through their successful compromise of victim organizations. The BianLian group appears to bring close to 30 new command-and-control (C2) servers online each month. In the first half of March, the group has already brought 11 new C2 servers online. The average lifespan of a server is approximately two weeks, Redacted said.


CIOs Must Make Call on AI-Based App Development

Erlihson says other key stakeholders necessary for an AI-based app development strategy include the chief data officer (CDO), who can help manage and govern the organization’s data assets, ensure data quality, and make sure that data is used in compliance with regulations. The chief financial officer (CFO) can ensure that the organization’s investments in AI-based tools are aligned with the financial objectives and overall budget of the company. “It's also important to include business leaders to identify business problems that can be solved by AI, providing use cases, and setting priorities for AI-based app development based on business needs,” he says. Legal and compliance must also be involved to ensure AI-based tools are compliant with data privacy laws and regulations, security, and ethical use of AI. “Finally, operations and IT teams are needed to provide feedback on the feasibility and scalability of AI-based tool development and deployment and to assure that the necessary IT infrastructure required to support AI-based app deployment is in place,” Erlihson says.


How Design Thinking and Improved User Experiences Contribute to Customer Success

Everything is about the needs, preferences and behaviors of users and the frustrations they sometimes face, with a continuous feedback loop used for perpetual reporting. The model emphasizes the need for diverse voices, experimentation with new ways of working, rapid prototyping and iteration, as well as a commitment to constantly improving the quality of service. As an example of experimentation, Airbnb unlocked growth by using professional photography to replace poor-quality images advertising property rentals in New York and saw an instant uptick. Done right, it has the benefit of challenging developer assumptions and management status quo. It helps to mitigate the narrative of ‘we’ve always done it this way’ or the temptation of ‘bloatware’, which adds pointless features and functions. Despite the name, Design Thinking doesn’t just impact software user experience design; product managers and others are also involved to create a holistic understanding of what is happening.


Sovereign clouds are becoming a big deal again

Although sovereign clouds aim to increase data privacy and sovereignty, there are concerns that governments could use them to collect and monitor citizens’ data, potentially violating privacy rights. Many companies prefer to use global public cloud providers if they believe that their local sovereign cloud could be compromised by the government. Keep in mind that in many cases, the local governments own the sovereign clouds. Sovereign clouds may be slower to adopt new technologies and services compared to global cloud providers, which could limit their ability to innovate and remain competitive. Consider the current artificial intelligence boom. Sovereign clouds won’t likely be able to offer the same types of services, considering that they don’t have billions to spend on R&D like the larger providers. Organizations that rely on a sovereign cloud may become overly dependent on the government or consortium operating it, limiting their flexibility and autonomy. As multicloud becomes a more popular architecture, I suspect the use of sovereign clouds will become more common. 


Why You Need a Plan for Ongoing Unstructured Data Mobility

Most organizations keep all or most of their data indefinitely, but as data ages, its value changes. Some data becomes “cold” or infrequently accessed or not needed after 30 days yet must be retained for a period of time for regulatory or compliance reasons; some data should be deleted; and some data may be required for research or analytics purposes later. ... Ensuring easy mobility for the data as it ages and understanding the best options for different data segments is paramount. Another reason why unstructured data mobility is imperative is due to growing AI and machine learning adoption. Once data is no longer in active use, it has the potential for a second or third life in big data analytics programs. You might migrate some data to a low-cost cloud tier for archival purposes but IT or other departments with the right permissions should be able to easily discover it later and move it to a cloud data lake or AI tool when needed for many different use cases.
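A toy sketch of the kind of age-based classification that drives tiering decisions: scan a directory and mark anything not touched in 30 days as a candidate for a colder tier. The 30-day cutoff mirrors the article's example; the paths and the hand-off to a migration job are placeholders.

```python
import time
from pathlib import Path

COLD_AFTER_DAYS = 30
now = time.time()

def classify(root: str) -> tuple[list[Path], list[Path]]:
    """Split files under `root` into hot (recently accessed) and cold."""
    hot, cold = [], []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        age_days = (now - path.stat().st_atime) / 86400
        (cold if age_days > COLD_AFTER_DAYS else hot).append(path)
    return hot, cold

hot, cold = classify("/data/projects")   # placeholder path
print(f"{len(cold)} archive candidates, {len(hot)} files stay on primary storage")
# A real workflow would hand `cold` to a migration job targeting a low-cost
# cloud tier, while keeping an index so the data stays discoverable later.
```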
 

Microsoft: 365 Copilot chatbot is the AI-based future of work

Microsoft CEO Satya Nadella said the new 365 Copilot chatbot will “radically transform how computers help us think, plan and act. “Just as we can’t imagine computing today without a keypad, mouse or multitouch, going forward we won’t be able to imagine computing without copilots and natural language prompts that intuitively help us with continuation, summarization, chain-of-thought reasoning, reviewing, modifying and acting,” he said. Copilot combines a large language model (LLM) with the 365 suite and the user data contained therein. Through the use of a chatbot interface and natural language processing, users can ask questions of Copilot and receive human-like responses, summarize online chats, and generate business products. Copilot in Word, for example, can jump-start the creative process by giving a user a first draft to edit and iterate on — saving hours in writing, sourcing, and editing time, Microsoft said in a blog post. "Sometimes Copilot will be right, other times usefully wrong — but it will always put you further ahead,"


How CISOs Can Start Talking About ChatGPT

Do we have the right oversight structures in place? The fundamental challenge with AI is governance. From the highest levels, your company needs to devise a system that manages how AI is studied, developed and used within the enterprise. For example, does the board want to embrace AI swiftly and fully, to explore new products and markets? If so, the board should designate a risk or technology committee of some kind to receive regular reports about how the company is using AI. On the other hand, if the board wants to be cautious with AI and its potential to up-end your business objectives, then perhaps it could make do with reports about AI only as necessary, while an in-house risk committee tinkers with AI’s risks and opportunities. Whatever path you choose, senior management and the board must establish some sort of governance over AI’s use and development. Otherwise employees will proceed on their own – and the risks only proliferate from there.



Quote for the day:

"Humility is a great quality of leadership which derives respect and not just fear or hatred." -- Yousef Munayyer

Daily Tech Digest - March 17, 2023

6 principles for building engaged security governance

No governance strategy can be built without knowing where the organization is currently and where it is going. Start by understanding the organization's core business practices, its product portfolio, customers, geographical footprint, and ethos and culture -- all from a security perspective. This should help answer key security-related questions, such as who does what, why they do it and for whom. Next, gain a better understanding of the organizational structure and current security standards, guidelines, regulations and frameworks. Get a better grasp of how security functions operate. Take a comprehensive review of the security policies in place and how effective they are. Understand the current state of security procedures, projects and activities, tests and exercises as well as the current level of information security controls and future roadmap. Assess the skills and capabilities of security practitioners and their responsibilities, and benchmark them against best practices in the industry to expose the gaps in existing capabilities and activities.


Cyber attribution: Vigilance or distraction?

In some situations, effective attribution can be a valuable source of intelligence for organizations that have suffered a cyber breach. Threat actors go to great lengths to cover their tracks, and any evidence and facts gathered through attribution can bring organizations closer to catching the perpetrators. Deploying a good Cyber Threat Intelligence (CTI) program helps organizations understand which current or future threats can impact their business operations. Some organizations don’t treat threat intelligence seriously because they already have their “go-to” culprit to blame, or they simply believe no one will attack a small organization. During the WannaCry attack, we witnessed a prime example of a poor interpretation of threat intelligence, when organizations and ISPs started blocking access to a sinkhole URL discovered by security researcher Marcus Hutchins. Far from being a malicious website, the URL acted as a kill switch: devices that could connect to it did not activate the malware’s payload, so blocking it resulted in further infections.


A Comprehensive Checklist For IoT Project Success

IoT projects often require a variety of specialized knowledge and skills, from edge and gateway devices to networking, cloud platforms, security, and real-time dashboard displays. Building a team with the appropriate expertise to manage this technology, along with the other necessary skills, helps ensure that the project is designed, developed and implemented successfully. If you do not have team members on hand with the necessary expertise, consider outsourcing some or all of the work to third-party IoT experts. ... IoT systems often need to handle vast amounts of data and potentially millions of devices at once, and planning for scalability ensures that the system’s architecture can handle the expected load (network coverage, reliability, bandwidth, latency, etc.) at both the edge and gateway level and the cloud platform level (end-user dashboards, alerts, and more). Best practices here include designing for modularity and flexibility, using scalable technologies, implementing caching and load balancing, and generally planning for growth.
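To make the bandwidth and scalability point above concrete, here is a minimal, hypothetical sketch of a gateway that batches sensor readings before shipping them upstream; the ingest URL, batch size, and sensor reader are illustrative assumptions, not part of the original checklist:

```python
# Hypothetical gateway sketch: batch readings so per-message overhead stays low
# as device counts grow. The ingest endpoint and tuning values are illustrative.
import time

import requests

INGEST_URL = "https://ingest.example.com/v1/telemetry"  # illustrative endpoint
BATCH_SIZE = 50          # tune against available bandwidth and latency targets
FLUSH_INTERVAL_S = 30    # flush periodically even when traffic is light

def read_sensor() -> dict:
    """Placeholder for a real sensor driver."""
    return {"ts": time.time(), "temp_c": 21.5}

batch, last_flush = [], time.time()
while True:
    batch.append(read_sensor())
    if len(batch) >= BATCH_SIZE or time.time() - last_flush >= FLUSH_INTERVAL_S:
        # One larger request instead of many small ones reduces network overhead
        # and lets the cloud side scale ingestion with load balancing and caching.
        requests.post(INGEST_URL, json=batch, timeout=5)
        batch, last_flush = [], time.time()
    time.sleep(1)
```

The same idea applies whether the transport is HTTP, MQTT, or something else: decide at design time how the edge will behave when the load grows, rather than after the fleet is deployed.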


Principles for Adopting Microservices Successfully

While microservices architecture has many benefits, it also introduces new challenges and potential pitfalls. One common pitfall is creating too many or too few services. Creating too many services can lead to unnecessary complexity, while creating too few can make the application difficult to maintain and scale. It is important to strike a balance by breaking the application into small, independent services that are focused on specific business functions. Another common pitfall is not considering the operational overhead of microservices. ... It is important to ensure the team has the necessary skills and resources to manage a microservices architecture, including monitoring, debugging, and deploying services. Finally, avoid creating overly coupled services. Tightly coupled services create dependencies and make it difficult to change the application. Design services with loose coupling in mind, ensuring that each service is independent and can be modified or replaced without affecting other services.
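As a rough sketch of what loose coupling looks like in practice, the hypothetical order service below depends only on an inventory service's HTTP contract, not on its database or internal code; the URL, payload shape, and endpoint names are assumptions made for illustration:

```python
# Hypothetical sketch of loose coupling: the order service knows only the
# inventory service's HTTP contract, so that service can be rewritten or
# replaced without changes here. URL and payload shape are illustrative.
import requests

INVENTORY_URL = "http://inventory-service/api/v1"  # illustrative internal URL

def reserve_stock(sku: str, quantity: int) -> bool:
    """Ask the inventory service to reserve stock via its public API."""
    resp = requests.post(
        f"{INVENTORY_URL}/reservations",
        json={"sku": sku, "quantity": quantity},
        timeout=2,  # fail fast so one slow dependency does not cascade
    )
    return resp.status_code == 201

def place_order(sku: str, quantity: int) -> str:
    if not reserve_stock(sku, quantity):
        return "rejected: out of stock"
    # ... persist the order in this service's own datastore ...
    return "accepted"
```

The contract (the endpoint and payload) is the only shared surface; each service keeps its own data and deployment lifecycle.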


Why Audit Logs Are Important

Audit logs have a different purpose and intended audience when compared to the system logs written by your application’s code. Whereas those logs are usually designed to help developers debug unexpected technical errors, audit logs are primarily a compliance control for monitoring your system’s operation. They’re helpful to regulatory teams, system administrators and security practitioners who need to check that correct processes are being followed. Audit logs also differ in the way they’re stored and retained. They’re usually stored for much longer periods than application logs, which are unlikely to be kept after an issue is solved. Because audit logs are a historical record, they could be retained indefinitely if you have the storage available. ... The information provided by an audit log entry will vary depending on whether the record relates to authentication or authorization. If it’s an authentication attempt, the request will have occurred in the context of a public session. Authorization logs, written by AuthZ services like Cerbos, will include the identity of the logged-in user.
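As a rough illustration of what such an entry might carry, here is a hypothetical sketch of an application emitting structured, append-only audit events; the field names and file sink are assumptions for illustration, not the schema used by Cerbos or any particular product:

```python
# Hypothetical sketch: write audit events separately from application debug logs,
# in a structured, append-only form suitable for long-term retention.
import json
from datetime import datetime, timezone

AUDIT_LOG_PATH = "/var/log/app/audit.log"  # typically shipped to durable storage

def write_audit_event(event_type: str, principal: str, action: str,
                      resource: str, allowed: bool) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,   # e.g. "authentication" or "authorization"
        "principal": principal,     # identity of the logged-in user, if known
        "action": action,
        "resource": resource,
        "allowed": allowed,
    }
    with open(AUDIT_LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: record an authorization decision for a compliance reviewer to inspect.
write_audit_event("authorization", "user:alice", "invoice:delete",
                  "invoice/2023-0042", allowed=False)
```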


Quantum Computing Is the Future, and Schools Need to Catch Up

Thankfully, things are starting to change. Universities are exposing students sooner to once-feared quantum mechanics courses. Students are also learning through less-traditional means, like YouTube channels or online courses, and seeking out open-source communities to begin their quantum journeys. And it’s about time, as demand is skyrocketing for quantum-savvy scientists, software developers and even business majors to fill a pipeline of scientific talent. We can’t keep waiting six or more years for every one of those students to receive a Ph.D., which is the norm in the field right now. Schools are finally responding to this need. Some universities are offering non-Ph.D. programs in quantum computing, for example. In recent years, Wisconsin and the University of California, Los Angeles, have welcomed inaugural classes of quantum information master’s degree students into intensive year-long programs. U.C.L.A. ended up bringing in a much larger cohort than the university anticipated, demonstrating student demand.


Forget the hybrid cloud; it’s time for the confidential cloud

“The use cases are expanding rapidly, particularly at the edge, because as people start doing AI and machine learning processing at the edge for all kinds of reasons [such as autonomous vehicles, surveillance infrastructure management], this activity has remained outside of the security perimeter of the cloud,” said Lavender. The traditional cloud security perimeter is built on encrypting data at rest in storage and in transit across the network, but data must be decrypted to be processed, so tasks like AI inferencing at the network’s edge still expose information while it is in use. “As the data there becomes more sensitive — particularly video data, which could have PII information like your face or your driver’s [license] or your car license [plate] number — there’s a whole new level of privacy that intersects with confidential computing that needs to be maintained with these machine learning algorithms doing inferencing,” said Lavender.


DevOps and Hybrid Cloud: A Q+A With Rosalind Radcliffe

The hardest thing to change in any transformation is the culture. The same is true in the IBM CIO office. Both new and experienced developers are learning from each other in a non-penalty environment. Although we have a full hybrid cloud and systems running in lots of places, the reality is I would like to run as much as appropriate on IBM Z from a security and availability, or always-on, standpoint. I can keep things more available on the platform because of the hardware stability, in addition to all the agile capabilities that I can exploit. Meanwhile, I continually chip away at the fears and the misconceptions about development on z/OS. One way to start is by removing that fear and making z/OS simpler and more accessible: encouraging people to play with z/OS in IBM Cloud with Wazi as a Service allows them to experiment, understand and learn in a penalty-free environment. They have the freedom to know they are not breaking an existing system or impacting production.


Cyberattackers Continue Assault Against Fortinet Devices

Fortinet described the attack on its customers' devices in some detail in its advisory. The attackers had used the vulnerability to modify the device firmware and add a new firmware file. They gained access to the FortiGate devices via the FortiManager software and modified the devices' start-up script to maintain persistence. Depending on the command it received from the command-and-control (C2) server, the malicious firmware could have allowed data exfiltration, the reading and writing of files, or a remote shell for the attacker, Fortinet stated. More than half a dozen other files were modified as well. The incident analysis, however, lacked several critical pieces of information, such as how the attackers gained privileged access to the FortiManager software and the date of the attack, among other details. When contacted, the company issued a statement in response to an interview request: "We published a PSIRT advisory (FG-IR-22-369) on March 7 that details recommended next steps regarding CVE-2022-41328," the company said.


3 ways layoffs will impact IT jobs in 2023

With no oversight, shadow IT services and tools increase risk and vulnerability to attack or, more commonly, lead to poor security hygiene. With mounting to-do lists and more projects than ever, overworked IT teams may default to rubber-stamping access in the name of productivity. But failure to properly govern identities within the organization can lead to a chain reaction of regulatory and budgetary compliance missteps. Automation tools can help ease identity governance worries internally, but IT teams should still be cognizant of what’s being used by employees externally and the business risks those tools pose. In the case of shadow IT, too many tech tools and services can be seen as the problem, but in many ways they can also serve as a solution. The right technology can go a long way toward alleviating common IT burdens; the key is choosing solutions that work well within a company’s existing technology stack and advocating for software that is easy for employees to use and integrate.



Quote for the day:

"All leaders ask questions, keep promises, hold themselves accountable and atone for their mistakes." -- James Kouzes and Barry Posner