Daily Tech Digest - July 20, 2023

DSPM: Control Your Data to Prevent Issues Later

Simply put, it’s becoming increasingly hard to prevent data security breaches and hacks — the attack surfaces have become too complex. Today, there are petabytes of data being stored, but only a small percentage is actually used and touched on a regular basis. Once the data is stored, it seemingly flows to everyone, and before long, no one knows what data is stored where and who has access to it. Data has become prevalent, especially with the increase in the number of cloud and SaaS applications. All employees, not only engineers, generate and transmit data, sometimes sensitive PII data that is subject to regulations like GDPR and HIPAA. Of course, companies attempt to maintain good data hygiene with risk assessments, labeling, and written policies and procedures (that no employee actually reads). All of this is largely done manually and adds more work for IT teams that are already drowning in security and risk assessments as well as security alerts. Add to that the fact that manual assessments are unsustainable: they are out of date the second they are completed, because they are point-in-time and don’t capture any changes.
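The gap between point-in-time manual assessments and continuous monitoring is what DSPM tooling tries to close. As a purely illustrative sketch (not any particular product, and far cruder than real classifiers, which combine ML, checksums, and context), an automated scan for common PII patterns might look like:

```python
import re

# Hypothetical PII patterns -- illustrative regexes only; real DSPM
# tools use much richer detection than this.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_for_pii(text: str) -> dict:
    """Return every match of each PII pattern found in `text`."""
    findings = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            findings[label] = matches
    return findings

record = "Contact jane.doe@example.com or 555-867-5309; SSN 123-45-6789."
print(scan_for_pii(record))
```

Run continuously against new data stores, even a crude scan like this catches sensitive records the moment they appear, rather than months later at the next audit.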


Cracking the code: solving for 3 key challenges in generative AI

People are really afraid of machines replacing humans. And their concerns are valid, considering the human-like nature of AI tools and systems like GPT. But machines aren’t going to replace humans. Humans with machines will replace humans without machines. Think of AI as a co-pilot. It’s the user’s responsibility to keep the co-pilot in check and know its powers and limitations. Shankar Arumugavelu, SVP and Global CIO at Verizon, says we should start by educating our teams. He calls it an AI literacy campaign. “We’ve been spending time internally within the company on raising the awareness of what generative AI is, and also drawing a distinction between traditional ML and generative AI. There is a risk if we don’t clarify machine learning, deep learning, and generative AI – plus when you would use one versus the other.” Then the question is: What more can you do if something previously took you two weeks and now it takes you two hours? Some leaders will get super efficient and talk about reducing headcount and the like. Others will think, I’ve got all these people, what can I do with them?


Training AI Models – Just Because It’s ‘Your’ Data Doesn’t Mean You Can Use It

The rise of generative AI has inspired many companies to leverage the data and content they have amassed over the years, to train AI models. It is important that these companies ensure they have the right to use this data and content for this purpose. The lessons from Everalbum are worth heeding. However, the FTC is not the only threat to companies training AI models. Class action attorneys are circling the waters and smell blood. At least one recent class action suit has been filed based on the use of images uploaded by users to train AI models, arguably without the proper consent to do so. ... The foregoing cases primarily address situations where companies used data they already had to train AI models, at least arguably without consent to do so. Many companies are newly collecting data and content from various sources to build databases upon which they can train AI models. In these cases, it is important to ensure that data is properly acquired and that its use to train models is permitted. This too has led to lawsuits and more will likely be filed.


How Platform Engineering Bridges the IT and DevOps Divide

Platform engineering and “platform as a product” have been key to the PaaS ecosystem for years but are now gaining fresh traction in the industry. In Puppet’s State of DevOps Report, 51% of respondents said they had already adopted platform engineering and 93% said it was a step in the right direction. Gartner predicted 80% of software engineering organizations will have platform teams by 2026. The concept can be defined in several ways. Gartner reported platform engineering is “an emerging trend intended to modernize enterprise software delivery… designed to support the needs of software developers and others by providing common, reusable tools and capabilities, and interfacing to complex infrastructure.” PlatformEngineering.org’s recent blog post defines it as the discipline of designing and building toolchains and workflows for self-service capabilities in software engineering organizations during the cloud-native era. Regardless of definition, platform engineering is the latest iteration of IT centralization, though now attempting to retain all the benefits of distributed team empowerment through “composition” rather than converged control.


Wi-Fi 7: Everything you need to know about the next era of wireless networking

With each iteration of Wi-Fi standards, channel widths have widened to allow for more simultaneous data transfer streams. Wider channels are intended to enable multiple devices to communicate, but increasing the channel width doesn't necessarily equate to faster speeds. There are often benefits to sticking with narrower channels of around 20-40MHz, but Wi-Fi 7 jumps to 320MHz for its 6GHz band. Wi-Fi 6E already uses a 6GHz band but is limited to 160MHz, so doubling the channel width is a big selling point for the upcoming standard. As with most technical advancements, real-world performance upgrades will rely on whether your devices are efficiently designed to support the maximum theoretical speeds of Wi-Fi 7.
... MU-MIMO (multi-user, multiple input, multiple output) increases to 16 streams for Wi-Fi 7 alongside the wider channel, doubling the 8 streams of Wi-Fi 6. The more antennas on your router, internal or external, the better equipped it will be to handle the maximum theoretical bandwidth limits.
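Those theoretical peaks follow from standard OFDM rate arithmetic: data subcarriers × bits per symbol × coding rate × spatial streams, divided by the symbol duration. The parameters below are simplified, commonly cited figures (1960/3920 data subcarriers, 5/6 coding rate, 12.8 µs symbol plus 0.8 µs guard interval), not a full reading of the 802.11 specs:

```python
def phy_rate_gbps(subcarriers, bits_per_symbol, coding_rate, streams,
                  symbol_duration_us=13.6):
    """Peak PHY rate: data bits carried per OFDM symbol across all
    spatial streams, divided by the symbol duration."""
    bits_per_second = (subcarriers * bits_per_symbol * coding_rate
                       * streams) / (symbol_duration_us * 1e-6)
    return bits_per_second / 1e9

# Wi-Fi 6: 160 MHz channel, 1024-QAM (10 bits/symbol), 8 streams
wifi6 = phy_rate_gbps(1960, 10, 5 / 6, 8)
# Wi-Fi 7: 320 MHz channel, 4096-QAM (12 bits/symbol), 16 streams
wifi7 = phy_rate_gbps(3920, 12, 5 / 6, 16)
print(f"Wi-Fi 6 ~ {wifi6:.1f} Gb/s, Wi-Fi 7 ~ {wifi7:.1f} Gb/s")
```

With these inputs the sketch lands near the headline figures of roughly 9.6 Gb/s for Wi-Fi 6 and 46 Gb/s for Wi-Fi 7; real-world throughput will be far lower, as the article notes.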


ChatGPT and Digital Trust: Navigating the Future of Information Security

As we navigate this monumental shift, the focus on information security and safeguarding against risks becomes paramount, particularly in the realm of AI. This is where the fascinating and complex issue of digital trust comes into play. Amidst recent news stories of data breaches and privacy concerns, the importance of digital trust and robust information security have never been more critical. ... In the age of AI, maintaining trust in our digital world is an ongoing process that requires constant attention and adaptation. It involves asking tough questions, making complex decisions and collaborating as a tech community. As we continue to integrate AI technologies like ChatGPT into our digital landscape, let’s focus on building a strong foundation of trust that promotes innovation while prioritizing the safety and well-being of everyone involved. As professionals in the technology field, it’s our responsibility to understand, adapt and innovate in a responsible and ethical manner. Let’s keep exploring, questioning and learning because that’s what the journey of technology is all about, especially when it comes to reinforcing information security.


Gartner: Generative AI not yet influencing IT spending, but enterprises should plan for it

“The generative AI frenzy shows no signs of abating,” said Frances Karamouzis, distinguished VP analyst at Gartner, in a statement. “Organizations are scrambling to determine how much cash to pour into generative AI solutions, which products are worth the investment, when to get started and how to mitigate the risks that come with this emerging technology.” That same poll found that 68% of executives believe the benefits of generative AI outweigh the risks, compared with just 5% that feel the risks outweigh the benefits. “Initial enthusiasm for a new technology can give way to more rigorous analysis of risks and implementation challenges,” Karamouzis stated. “Organizations will likely encounter a host of trust, risk, security, privacy and ethical questions as they start to develop and deploy generative AI.” Another survey, this one published by MIT Technology Review Insights and sponsored by enterprise data management company Databricks, polled 600 senior data and technology executives.


IDEA: a Framework for Nurturing a Culture of Continuous Experimentation

Empathy and trust go a long way when building relationships. If the team is expected to pick up new skills, they need to have dedicated and uninterrupted time to practice and learn. As a team, you can timebox the uninterrupted time you need. However, expecting your team to pick up new skills while they’re also expected to work full-time on their current projects will end in disappointment and burnout. Another important factor is that people adopt new skills differently. Some people learn better in groups and some alone. I always respect individual preferences. However, having a couple of hours of workshops for the whole team often benefits everyone. During these workshops everyone can discuss their learning, questions, and interesting facts they found out. From my experience as a consultant, I often find myself stepping into the unknown with new clients and projects. This has taught me that openness, honesty and curiosity are fundamental.


Study: We Are Wasting Up to 20 Percent of Our Time on Computer Problems

Surprisingly, studies reveal that a significant amount of our time spent on computers, averaging between 11 and 20 percent, is wasted due to malfunctioning systems or complex interfaces that hinder our ability to accomplish desired tasks. Professor Kasper Hornbæk, one of the researchers involved in the study, deems this situation far from satisfactory. “It’s astonishing how high this percentage is. Almost everyone has experienced the frustration of a critical PowerPoint presentation not being saved or a system crashing at a crucial moment. While it is widely recognized that creating IT systems that align with users’ needs is challenging, the occurrence of such issues should be much lower. This highlights the insufficient involvement of ordinary users during the development of these systems,” Professor Hornbæk asserts. Professor Morten Hertzum, the other researcher contributing to the study, emphasizes that the majority of frustrations stem from the performance of everyday tasks, rather than complex endeavors.


Mitigating the organisational risks of generative AI

Firstly, keeping an eye on how their systems are being used, by rolling up topics, attacks and other exploits to understand the moving threat landscape, will be key — along with keeping warning thresholds low for anomalous events. Ensuring all AI-augmented platforms and services have a dedicated ‘kill switch’, with the ability to revoke keys and other methods of access, will become ever more vital as we advance to peak GenAI. ... It’s often a great yardstick of how a service, function or platform is performing in the market, so keeping a watch on service and keywords after a big product launch is always a good idea — especially when it comes to picking up any AI responses that break ethics or are reputationally damaging. Providing engineering teams with access to the latest AI-related news on the underlying technologies they’re using is another preventative measure you can put in place. This will help them quickly spot any upstream problems, allowing engineers to proactively restrict affected services as required.



Quote for the day:

“If we wait until we’re ready, we’ll be waiting for the rest of our lives.” -- Lemony Snicket

Daily Tech Digest - July 19, 2023

This is why personal encryption is vital to the future of business

We already recognize that humans are the weakest link in any security infrastructure. But what isn’t sufficiently recognized is that any action that puts those humans more at risk makes anyone they work for more vulnerable. A well-resourced attacker will simply identify who works at the company they're aiming for and then find ways to compromise some of those individuals using seemingly unrelated tricks. That compromised data will then feed into more sophisticated attacks against the actual target. So, what makes it easy to create those customized attacks in the first place? Information about those people, what they enjoy, who they know, where they go, and how they flow. That’s precisely the kind of data any weakening in end-to-end encryption for individuals makes easier to get. Because if you weaken personal data protection in one place, you might as well weaken it in every place. And once you do that, you’re presenting hackers and attackers with a totally tempting table of attack surface treats to chow down on. This is not clever, nor is it sensible.


Data protection and AI - accountability and governance

Part of risk remediation will include having policies and procedures in place that ensure operational staff have sufficient direction as to their roles and responsibilities. These should be readily available and supported by training. Risk management policies will need to be implemented or existing policies updated to address AI-specific considerations: for example, obtaining and handling AI training and test data; procuring and assessing external software; allocating roles and responsibilities for validation and independent sign-off of AI system development, deployment and updates (which may also include a role for an ethics committee); and ensuring policies relevant to automated decision making address risks of bias, prejudice or lack of interpretability. ... The UK GDPR requires controllers to be transparent with individuals about how their personal data will be collected and processed within AI systems, including by telling them how and why such data will be processed, explaining any decisions made with AI, and stating how long any personal data will be retained and who it will be shared with.


E-Waste: Australia’s Hidden ESG Nightmare

For Australian enterprises, e-waste is an IT life-cycle challenge, as much as an environmental one. With an increasingly decentralized workforce, IT teams are struggling to keep up with patch maintenance as well as the provisioning and deployment of new devices in such a way that it doesn’t disrupt operations. Consequently, these organizations are prone to create unnecessary e-waste through their poor processes, which can incur several consequences for a business. ... It remains true that managing e-waste at scale can be a logistical challenge for organizations. The best solution would be for IT teams to work with their suppliers and partners to establish a cyclical logistics chain, where older equipment is automatically fed back to the vendor and added to their e-waste management programs using the same logistics that deliver new technology. With the right partners and suppliers, which can offer reliable data-wiping services, the IT team will be able to manage the challenges of e-waste management in Australia. Largely due to these risk factors, the costs of poorly managing e-waste are likely to accelerate rapidly in the months ahead.


The draft data privacy law surprises with its simplicity

For the most part, the draft Digital Personal Data Protection Bill was pretty much what we had been promised—simple, principles-based and generally appropriate for our current stage of maturity. Most businesses I spoke with confirmed that, if passed as is, they would have no problem complying with the obligations it imposed after a reasonably short transition period. To be clear, there were things we would have liked to see changed—clauses that needed to be tweaked and others I would have liked removed. I had an opportunity to engage in the consultations that followed and found the government not just willing to hear our points of view, but keen to understand what impact the text of the draft would have on implementation of the law. In a truly democratic process, it is impossible for everyone’s suggestions to be incorporated, especially when they come from different perspectives. That is probably the case for several of my suggestions; where a multiplicity of views exists, only one can ultimately be reflected.
The question is how an enterprise can use its data to do more than just cool things. Enterprises are considering how their data can help shareholders. Kobielus wrote TDWI’s Best Practices Report with an eye to determining the chief factors that contribute to data monetization success. He found what he calls “four strategies for data monetization.” “The first one may not, at first glance, sound like a key strategy for monetization of data at all, but it is. It is data democratization -- giving everybody in your organization access to the best data you have to support data-driven analytics,” such as performing queries and producing reports. Enterprises can see the payoff of data democratization in terms of qualitative factors (such as employees working smarter), but there are quantitative factors as well, such as making better business decisions that enable the organization to boost sales, hold on to customers, or upsell to existing customers. “When we talk about data monetization, it's a maturity model, where you move from data democratization to operationalized data.”


Managing Human Risk Requires More Than Awareness Training

The first step in managing human risk is to conduct a risk assessment to identify the risk factors most critical to the organization. Sound familiar? To be successful, a risk analyst must assess the likelihood of a vulnerability being exploited and the impact that would occur because of the event. To find these threat sources, the security operations team should be engaged to uncover documentation regarding cyberincidents, threat intelligence and mitigation plans from past audits. The security operations team also tests users on the likelihood of penetration, for example, through phishing simulation exercises. Once an assessor has this information, they can build a risk register to prioritize the highest risk factors. Any educator knows that it is not possible to teach someone everything that they need to know and expect them to retain all the information. ... For example, employees in an organization should be made aware of the risk associated with phishing attacks or identity theft efforts that engage employees through attack vectors such as emails, texts or phone calls.
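The likelihood-and-impact scoring described above reduces to simple arithmetic. A toy risk register (entries and scores invented for illustration) could be prioritized like this:

```python
# Each entry: (risk, likelihood 1-5, impact 1-5) -- illustrative values only.
risks = [
    ("Phishing credential theft", 5, 4),
    ("Unpatched VPN appliance", 3, 5),
    ("Lost unencrypted laptop", 2, 4),
    ("Insider data exfiltration", 2, 5),
]

def build_risk_register(entries):
    """Score each risk as likelihood * impact and sort highest first."""
    scored = [(name, likelihood * impact) for name, likelihood, impact in entries]
    return sorted(scored, key=lambda r: r[1], reverse=True)

for name, score in build_risk_register(risks):
    print(f"{score:>3}  {name}")
```

The point of the register is the ordering, not the absolute numbers: the assessor's attention, and any subsequent training, goes to whatever floats to the top.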


A quick intro to the MACH architecture strategy

At the very least, most software teams are likely putting one or more MACH elements to considerable use already. In that case, this evaluation will help reveal which of the four components your organization might be overlooking. For instance, if your organization is currently deploying microservices-based applications on individual servers, deploying those applications in containers across a cluster of servers would be one way to align more closely with a MACH strategy. Another plausible scenario is that a software team already uses microservices and cloud-native hosting, but isn't yet managing APIs in a way that positions them at the center of application design plans and build processes. Adopting an API-first development strategy -- that is, one that places a priority on determining how APIs will behave and addresses specific business requirements before any actual coding starts -- would place that team one step closer to proper MACH adoption. However, for teams that are truly starting at square one, such as those still running a localized monolith, it often makes the most sense to start out with headless application design.


Is PC-as-a-Service part of your hybrid work strategy?

Getting new PCs into the hands of employees and making sure they’re regularly refreshed is complex. The old models of centralized staging and warehousing can create delays and excess shipping costs in today’s hybrid workstyles. Moreover, IT teams struggle to find time to manage day-to-day PC lifecycle tasks. ... By taking this service-oriented approach to PC management, IT teams will spend less time managing and supporting devices, freeing up time to focus on projects that have a greater impact on the business. From a financial perspective, Dell APEX PCaaS flips the script of employee device purchasing from a fixed cost to a predictable, monthly expenditure. Payments that spread out over time—like leasing a car or subscribing to cable services—align with your experience of consuming cloud software while affording you flexibility in how you plan your budget and allocate people resources. With Dell APEX PCaaS you can help your overworked IT staff deploy, support, and manage PCs, reducing time to value and total cost of ownership while ensuring that employees remain productive.


Why and how CISOs should work with lawyers to address regulatory burdens

As the regulatory burden increases, organizations and CISOs are having to take ownership of cyber risk, but it needs to be seen through the lens of business risk, according to Kayne McGladrey, field CISO with Hyperproof. Cyber risk is no longer simply a technology risk. "The problem is, organizationally, companies have separated those two and have their business risk register and their cyber risk register, but that’s not the way the world works anymore," says McGladrey. He believes the Securities and Exchange Commission (SEC), the Federal Trade Commission (FTC) and other regulators in the US are trying to promote collaboration among business leaders because cyber risks are functionally business risks. ... However, not all CISOs are naturally well versed in defining the business case of cyber risk, and McGladrey believes CISOs who are more adept at articulating the business value of doing cybersecurity will find it easier to achieve buy-in, while those with a more technical background that emphasize compliance over business risk may find it more difficult to get support and budget.


Stress Test: IT Leaders Strained by Talent Shortage, Tech Spend

George Jones, CISO at Critical Start, says a shortage of skilled professionals has led to delays in certain projects and increased workloads for existing team members. “To combat these delays, we have looked at upskilling current employees, brought in interns with specific skill sets, leveraged contract and freelance workers, and implemented knowledge-sharing to encourage cross-functional collaboration, empowering employees to learn from one another,” he says. He explains Critical Start employees have clearly defined roles and responsibilities that align with their team and organizational goals, and cross-functional collaboration is encouraged to leverage diverse perspectives and expertise. “Agile methodologies promote transparency, adaptability, and iterative progress and foster a culture of psychological safety where individuals feel comfortable sharing ideas, taking risks, and learning from failures,” he adds. Jones says that to foster a culture of communication and collaboration, his teams meet regularly to share knowledge and project updates, and to provide feedback on what is working and what isn’t.



Quote for the day:

"When your values are clear to you, making decisions becomes easier." -- Roy E. Disney

Daily Tech Digest - July 18, 2023

Embrace AI Acceleration by Investing in Reliability

There’s no way to completely eliminate the unreliability risks of AI without also eliminating all of its benefits. Manually reworking every line of code the AI writes to be “robustly human-compatible,” for example, makes it not much faster than writing code yourself. Instead, let AI accelerate you where it can, and empower the people steering it to mitigate the risk. A major advantage of engineers over current AI models is perspective. Your AI copilot is lightning-fast at producing and testing code, but it doesn’t understand why you’re asking for these tasks. Unfortunately, human engineers can also end up stuck regurgitating code from requests, not knowing the big picture or having any impact on it. When they become “managers” of AI, it’s more important than ever to empower your engineers with this perspective. ... Even without needing to understand the details of the AI-written code, each engineer can tackle things on a higher level, mitigating the effect of the problem on the intended outcome of the service. They’ll know what your users care about and how to leverage AI to quickly bring back functionality. 


CISOs under pressure: Protecting sensitive information in the age of high employee turnover

A risk assessment can quickly identify and prioritize cyber vulnerabilities so that you can immediately deploy solutions to protect critical assets from malicious cyber actors while immediately improving overall operational cybersecurity. This includes protecting and backing up business enterprise systems such as: financial systems, email exchange servers, HR, and procurement systems with new security tools and policies. There are measures in a vulnerability framework that are not cost prohibitive. Those measures can include mandating strong passwords for employees and requiring multi-factor authentication. Firewalls can be set up and CISOs can make plans to segment their most sensitive data. Encryption software can also be affordable. The use of the cloud and hybrid clouds enables implementation of dynamic policies, faster encryption, drives down costs, and provides more transparency for access control. A good cloud provider can provide some of those security controls for a reasonable cost. 


A Tutorial About Dealing With an Obfuscated Code

Security researchers face numerous challenges in their work, and malware writers consistently attempt to compound these existing challenges with additional obstacles. Therefore, when researchers examine a script, code, or file, it often exhibits lengthy and vague variable names, occasionally encrypted using methods like Base64 or subjected to XOR operations. The code may contain unnecessary data, including thousands of lines of code that are never utilized, among other elements deliberately intended to perplex and consume the valuable time of the researcher. ... It’s worth considering that deobfuscation techniques can vary, offering different approaches and potential solutions. As a result, you may come across alternative methods that resonate better with your preferences and prove to be more effective for your specific needs. ... Occasionally, you may encounter files without proper indentation. In such cases, you can search online for “VBS beautify” tools to assist in organizing and formatting the code. So, this is what we are dealing with today. There’s no need to overanalyze it at this point; we will soon tackle it together:
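The Base64 and XOR layers mentioned above are usually the easiest to peel off once identified, since XOR with a fixed key is its own inverse. A minimal sketch with a made-up payload (not the file from the tutorial itself):

```python
import base64

def xor_decode(data: bytes, key: int) -> bytes:
    """Undo a single-byte XOR: applying XOR twice with the same
    key returns the original bytes."""
    return bytes(b ^ key for b in data)

# A toy obfuscated string: plaintext XORed with 0x42, then Base64-encoded.
plaintext = b"calc.exe"
obfuscated = base64.b64encode(xor_decode(plaintext, 0x42))

# Deobfuscation reverses the layers in the opposite order:
# Base64-decode first, then XOR with the recovered key.
decoded = xor_decode(base64.b64decode(obfuscated), 0x42)
print(decoded)  # b'calc.exe'
```

Real samples stack more layers and use longer keys, but the workflow is the same: identify the outermost encoding, strip it, and repeat until readable strings emerge.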


Police Scotland use cloud for biometric data despite clear risks

Computer Weekly contacted Police Scotland about various aspects of the story, including why DNA and fingerprint data was deemed too sensitive for the system, but other biometric information was not; why it considers encryption to be an effective safeguard in this instance; and why it decided to press forward with the DESC pilot despite major data protection concerns being highlighted by both the SPA and ICO. “Police Scotland continues to work closely with all relevant partners to identify, assess and mitigate any risks relating to data sovereignty, where required. Further risk assessments and mitigation will be kept under ongoing scrutiny,” said a Police Scotland spokesperson. “All digital evidence on the DESC system is held securely. Access to the information is fully audited and monitored, and only accessible to approved personnel. ... “We take the management and security of data seriously. We are working with our criminal justice partners to ensure robust, effective and secure processes are in place to support the development of the system and will continue to engage with the biometrics commissioner, the Information Commissioner’s Office and relevant partners.”


Using Snapshots to Improve Data Security

Snapshots can augment backups for data protection. For those wishing to reduce their recovery point objective without spending a fortune, snapshots are one option. Backups can recover data anywhere from a day ago to a week or more, depending on when the last backup was done. Anything later than the last backup is lost. Snapshots can take the RPO down to an hour or so, depending on how often they are done. Some businesses run snapshots more often than once an hour due to the sensitive or financially lucrative nature of the data they process. ... One way to achieve immutability is to send data to a tape archive that remains offline. That air gap means that cybercriminals can’t cause any mischief as there is no direct networking connection to the data. But there are other solutions to immutability — some better than others. Some try to pass off cloud storage as being immutable. In reality, it is just cloud storage with extra layers of protection. Pure Storage is one vendor that has put together some immutability features that make snapshots more valuable. 
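The RPO arithmetic behind this is straightforward: the worst-case loss is the time elapsed since the last protection point. A quick illustration with invented times:

```python
import math

def data_loss_hours(failure_time_h: float, interval_h: float) -> float:
    """Hours of work lost if a failure occurs at `failure_time_h`,
    given protection points taken every `interval_h` hours from t=0."""
    last_point = math.floor(failure_time_h / interval_h) * interval_h
    return failure_time_h - last_point

failure = 17.5  # hypothetical failure 17.5 hours after the nightly backup
print(data_loss_hours(failure, 24.0))  # daily backup only -> 17.5 hours lost
print(data_loss_hours(failure, 1.0))   # hourly snapshots  -> 0.5 hours lost
```

The same failure costs 17.5 hours of work under a daily-backup-only regime but only half an hour with hourly snapshots, which is the RPO improvement the article describes.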


Unlocking the Full Hybrid Cloud Potential With Modern Data Management

Protecting data along its journey to the cloud requires complete visibility. Legacy systems often create data siloes, making it difficult to see what’s happening in a given corner of the business. When modernizing, companies should prioritize solutions that allow for siloes to be eliminated. This ultimately offers decision-makers a picture of their data across the entirety of the enterprise. Furthermore, due to the sheer volume of data in the hands of today's typical business, software solutions that bring agility and flexibility to data management are also a must. Hybrid migrations can facilitate frictionless modernization. However, continuous, successful transformation hinges on ensuring the business is equipped with the right tools in its technology stack to drive this objective. This emphasizes the point that for hybrid cloud strategies to result in successful modernization, deep visibility and strong controls on data in transit are crucial. ... With established, effective data practices, organizations can more freely interact with their valuable and critical data without incurring risk.


A Disturbing Trend in Ransomware Attacks: Legitimate Software Abuse

Leveraging legitimate software can allow attackers’ activity to remain hidden, which may allow them to achieve their goals on a victim network without being discovered. Legitimate software misuse also can make attribution of an attack more difficult, and these tools can also lower barriers to entry. This means less-skilled hackers may still be able to conduct quite wide-ranging and disruptive attacks. The legitimate tools we most commonly see being used by malicious actors are remote monitoring and management (RMM) tools, such as AnyDesk, Atera, TeamViewer, ConnectWise, and more. In fact, the use of RMM software by malicious actors was considered serious enough for the Cybersecurity and Infrastructure Security Agency (CISA) to issue an alert about this kind of activity. As recently as February this year, the Symantec Threat Hunter team saw ConnectWise used in both Noberus and Royal ransomware attacks. These tools are commonly used legitimately by IT departments in small, midsize, and large organizations.


Only half of organizations “very prepared” to meet global data privacy laws

The survey suggests that those who feel they are very prepared to meet data privacy laws may not be as ready as they believe. While 70% say they have designated an internal project manager or owner and 58% conduct regular training of staff on data privacy and compliance, less than half of the overall respondent pool have taken the following steps: engaged outside legal counsel (42%), participated in a peer group to keep abreast of changes (40%), or developed a task force/oversight counsel to track privacy law changes (35%), the research found. ... "Data mapping - knowing what data you have and where it lives - is foundational for any effective data privacy and cybersecurity strategy," wrote Tara Cho, partner, chair of the Womble Bond Dickinson privacy and cybersecurity team, and report contributor. While many companies might implement external-facing actions, such as putting a cookie banner on their website or updating privacy policies, there is still a need to build out back-end requirements to truly operationalize the compliance requirements, Cho added.
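A first, crude pass at the data mapping Cho describes can be as simple as walking the filesystem and tallying what lives where. The sketch below is illustrative only; real data mapping also covers databases, SaaS applications, and data flows between them:

```python
import os
from collections import Counter

def map_data(root: str) -> Counter:
    """Walk a directory tree and tally files by extension --
    a crude first inventory of what data lives under `root`."""
    inventory = Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower() or "(none)"
            inventory[ext] += 1
    return inventory

# Example usage (path is hypothetical):
# print(map_data("/srv/shared").most_common(5))
```

Even an inventory this simple surfaces surprises (databases exported to spreadsheets, old archives nobody owns) and gives the privacy program a concrete starting point.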


Is quantum computing the next frontier for machine learning experts?

“We need more quantum literate programmers and engineers; but equally as important, I encourage quantum literacy across a wide range of diverse roles. For example, we need quantum literate scientific journalists, policy makers, ethicists, teachers, cyber analysts and strategists,” says Dr Kristin M. Gilkes, global innovation quantum leader at EY. “Quantum is a domain for which we need all kinds of diverse thinking and leadership, not just the physicists, programmers and engineers.” ... “Quantum is picking up pace and given the advances we are seeing using a hybrid ML/quantum process, I think we are going to see serious benefits in the next two to three years,” Dr Gilkes adds. “We are finding a symbiotic relationship between the disciplines of AI and quantum, each bringing their own value to the table and making the other more efficient and faster. ML has the ability, today, to organise and manipulate large data sets really well, which is a function that quantum computing can benefit from.” Similar to how AI is surpassing all scaling timeframe predictions, Dr Gilkes believes that the rapid advancement of quantum computing means its impact will be felt in the next couple of years.


How Intelligent Applications Can Boost Sales

One way an intelligent app can increase sales is by creating a personalized user experience. “This focuses on offering potential customers products or services that are applicable to them specifically, based on data obtained from prior user interactions, past searches, or surveys,” says Danielle Borisovsky, a manager in intelligent automation technologies at automation firm Reveal Group. Lead prioritization is another way intelligent applications can help spur sales. Ranking leads based on potential value and conversion probability allows sales teams to focus on the most promising prospects, Ours says. “Elements helping to prioritize leads can range from prior history, strength of relationship, size of the deal, customer monetization value, or even the maturity of your product or offering.” Perhaps the most popular -- and valuable -- intelligent application sales tool is forecasting. “By analyzing historical sales data and various market factors, AI-powered sales applications can generate more accurate forecasts, driving better decision-making, upselling, and cross-selling,” Ours says.
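
The lead-prioritization approach described above, ranking leads by potential value and conversion probability, can be sketched in a few lines. The lead fields, sample companies, and scoring rule below are invented for illustration, not taken from any vendor's product:

```python
from dataclasses import dataclass

# Hypothetical lead-scoring sketch: rank prospects by expected value
# (deal size times conversion probability). All fields and numbers
# are illustrative assumptions.

@dataclass
class Lead:
    name: str
    deal_size: float        # expected deal value
    conversion_prob: float  # estimated probability of closing, 0..1

def score(lead: Lead) -> float:
    """Expected value of the lead: size weighted by conversion probability."""
    return lead.deal_size * lead.conversion_prob

def prioritize(leads: list[Lead]) -> list[Lead]:
    """Sort leads so the most promising prospects come first."""
    return sorted(leads, key=score, reverse=True)

leads = [
    Lead("Acme", deal_size=50_000, conversion_prob=0.10),
    Lead("Globex", deal_size=8_000, conversion_prob=0.80),
    Lead("Initech", deal_size=20_000, conversion_prob=0.40),
]
for lead in prioritize(leads):
    print(f"{lead.name}: {score(lead):,.0f}")  # Initech, then Globex, then Acme
```

A real system would learn the conversion probabilities from prior history and relationship strength rather than hard-coding them, but the ranking step itself is this simple.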



Quote for the day:

"The first task of a leader is to keep hope alive." -- Joe Batten

Daily Tech Digest - July 17, 2023

EU urged to prepare for quantum cyberattacks with coordinated action plan

The narrow focus at the EU level on how to mitigate short-term quantum cybersecurity challenges, especially harvest attacks and quantum attacks on encryption, leaves member states as the frontline actors in the quantum transition, Rodríguez said. "As of 2023, only a few EU countries have made public plans to counter emerging quantum cybersecurity threats, and fewer have put in place strategies to mitigate them, as in the case of Germany." As quantum computers develop, European action will be needed to prevent cybersecurity loopholes that can be used as attack vectors and ensure that all member states are equally resilient to quantum cyberattacks. "A Coordinated Action Plan on the quantum transition is urgently needed that outlines clear goals and timeframes and monitors the implementation of national migration plans to post-quantum encryption," Rodríguez claimed. Such a plan would bridge the gap between the far-looking objective of establishing a fully operational European Quantum Communication Infrastructure (EuroQCI) network and the current needs of the European cybersecurity landscape to respond to short-term quantum cybersecurity threats.


What the CIO role will look like in 2026

“The CIO role in 2026 will be about influencing, leading, and governing, as opposed to technology selector, integrator, configurator, and customizer. And CIOs who are not on top of this before 2026 will find themselves having to catch-up,” says Joseph Bruhin, CIO of Breakthru Beverage Group. In other words, CIOs three years out will be even farther away from the technical chief of yesteryear and closer to corporate strategist. “With every company being digital, CIOs will take on the role of the architect of the company, not just the architect of digital,” says Vipin Gupta, former chief information, strategy and digital officer at Toyota Financial Services International and the 2021 MIT Sloan CIO Leadership Award Winner. IT leaders describe the CIO of 2026 and beyond as an “influencer,” “strategic thinker,” and “eloquent communicator and leader.” They say the CIO will need to be flexible, innovative, and nimble. And they stress the need for CIOs to be even more visionary than they are today, because they’ll have a lead role in shaping the organization’s future, not just supporting it.


Breach Roundup: IT Worker Sentenced for Impersonation

Assigned to the investigation, Liles, an IT staff member at Oxford Biomedica, decided to manipulate the situation for personal gain. Instead of directing the ransom payment to the genuine hackers, he secretly altered the original ransom demand. Using the email account of an Oxford Biomedica board member, Liles redirected the funds to a bitcoin wallet under his control. Consequently, if the company chose to pay the ransom, the money would end up in Liles' hands rather than with the actual attackers. Liles also created an email address strikingly similar to that of the original hacker and began pressuring the employer to pay a 300,000-pound ransom. Specialists from the South East Regional Organized Crime Unit's Cyber Crime Unit became suspicious during their investigation. They identified unauthorized access to the board member's email and traced it back to Liles' home address. The charges brought against Liles included blackmail and unauthorized access to a computer with intent to commit other offenses. The court's decision is a reminder of the severe consequences that individuals who exploit their positions for personal gain may face.


Quantum Leaps: Interest and Investment in Quantum Computing

The era of quantum computing has only just begun. The pace of innovation in this nascent, emerging space is simply remarkable, experts say, especially as companies and governments around the world increase both their interest and investment in the technology. While the people working in QC (quantum computing) believe it will transform the future of computing, no one knows for sure exactly how or when, because there is simply not enough known about what today’s quantum computers can actually do. And despite its promise, quantum currently has limited applications, and only a handful of these applications are moving past research into real-life scenarios. However, with all the investment and startup activity in the quantum space, it’s safe to assume that it will reshape computing, and it may do so sooner than expected. Alan Baratz, CEO of D-Wave, points to a study from Hyperion Research, which found that more than 80% of responding companies plan to increase quantum commitments in the next 2-3 years, and one-third of those companies say they will spend more than $15 million annually on quantum computing efforts.


The biggest barrier to AI productivity is people

Most people already struggle to find the information they need, which is what led to Google’s massive search business. Within the enterprise, Roth says, roughly one-third of respondents to the 2022 Gartner Digital Worker Survey reported that they frequently struggle to find the information they need to do their jobs well. Perhaps worse, 22% have missed important updates because of the sheer volume of applications and information thrown at them. This is the state of workers in the pre-GenAI world. “Now throw in more content being produced at a quicker pace,” Roth says, “Emails that used to be short and to the point may now be inflated to full, polite corporatespeak by the AI.” A bad problem becomes dramatically worse as more people create more content of middling quality, trusting the AI to get the facts correct. And it often won’t; things like ChatGPT aren’t interested in truth—that’s not what they’re for or how they’re engineered. The solution to this machine-generated problem is to reinsert people into the mix. People are still needed to fact-check and do quality control. 


Unconventional Recruiting Methods That Can Help Fill The Tech Talent Gap

Partnering with local schools and nonprofit organizations can help build talent pipelines. Providing learning opportunities for students of all ages—from elementary school through college—by exposing them to various technology disciplines can generate interest and encourage them to consider professions in the field. Teaching and mentoring the next generation are crucial for employers who want to grow future talent pools organically. Speaking at schools and nonprofit organizations allows you to meet and handpick potential employees rather than simply waiting for responses to job postings. ... Another solution for expanding talent pools is creating entry-level “bench” or “evergreen” positions that allow individuals to expand their strengths and work experience by rotating through different IT disciplines. The positions are general and designed to get talented individuals into an organization with the idea that they’ll move into more permanent roles as the right fits become available.


Panic about overhyped AI risk could lead to the wrong kind of regulation

The demand for AI stories has created a perfect storm for misinformation, as self-styled experts peddle exaggerations and fabrications that perpetuate sloppy thinking and flawed metaphors. Tabloid-style reporting on AI only serves to fan the flames of hysteria further. These types of common exaggerations ultimately detract from effective policymaking aimed at addressing both immediate risks and potential catastrophic threats posed by certain AI technologies. For instance, one of us was able to trick ChatGPT into giving precise instructions on how to build explosives made out of fertilizer and diesel fuel, as well as how to adapt that combination into a dirty bomb using radiological materials. If machine learning were merely an academic curiosity, we could shrug this off. But as its potential applications extend into government, education, medicine, and national defense, it’s vital that we all push back against hype-driven narratives and put our weight behind sober scrutiny.


Want to make cybersecurity much stronger? Become a mentor

Those who have been around the world of cybersecurity for a while have long realized the importance of the chief information security officer's (CISO) role in leading teams charged with maintaining the security of corporate data and much, much more. But both freshly minted and veteran CISOs can sometimes feel they're stranded on a desert island for several reasons. They may be new to the role and acclimating to the responsibility and, of course, the accountability they are now shouldering. Others may find themselves having to rapidly garner knowledge and perspective when a situation about which they lack familiarity lands on their plate. This is where mentors and mentorship can be invaluable. So, I set out to determine what that looks like today and how accessible CISOs are to one another. ... "Mentorship in the cybersecurity field is an invaluable tool in both an individual's and an organization's maturity. CISOs who have been through the wringer have considerable wisdom to share about everything from ransomware remediation to dealing with recalcitrant CFOs," shared Craig Burland, CISO of Inversion6. 


Tales from Production: How Real-World Coders Are Using AI

Some programmers on Hacker News were using AI tools for debugging — and even “rubber duck” debugging, where describing a code’s function (and its bugs) sometimes produces crucial insights into problems. “I’ve found rubber duck debugging to be an exceptionally effective use case for ChatGPT,” one developer posted. “Often it will surprise me by pinpointing the solution outright, but I’ll always be making progress by clarifying my own thinking.” But just how good is AI at debugging its own code? One commenter complained that at the end of the day, “Sometimes it’d give completely wrong answers. It’s just not code I’d commit or let pass a code review.” Another doubted AI’s ability to fix those bugs. “They can approximate the syntax of things in their training corpus, but logic? The lights are off and nobody’s home.” But another commenter believes in AI’s potential. “I’ve already had the GPT3.5-Turbo model walkthrough and step-by-step isolate and diagnose errors. They 100% can troubleshoot and correct issues in the code..." 


DevOps and Cloud InfoQ Trends Report – July 2023

In the accompanying cloud and DevOps trends podcast discussion, the participants address the state of cloud innovation and DevOps. They agree that cloud innovation has slowed down, moving from "revolution" to "evolution". While large numbers of organizations have adopted cloud technologies, there are many enterprises that still want to migrate and re-architect workloads. As for DevOps, it has reached a stage of stagnation in some organizations: the concept, which aims to give teams the access and autonomy to create business value, is still alive, but its implementation has faced challenges. The panelists mentioned their interest in Value Stream management to unlock DevOps’s flow and value realization. The public cloud vendors have evolved from their original goal of providing on-demand access to scalable resources to focus more on offering managed services. This evolution has made cloud computing more ubiquitous. However, technology is changing rapidly around existing services, new business requirements are being discovered, and new challenges are emerging.



Quote for the day:

"Leadership is a journey, not a destination. It is a marathon, not a sprint. It is a process, not an outcome. " -- John Donahoe

Daily Tech Digest - July 16, 2023

The engines of AI: Machine learning algorithms explained

Machine learning algorithms train on data to find the best set of weights for each independent variable that affects the predicted value or class. The algorithms themselves have variables, called hyperparameters. They’re called hyperparameters, as opposed to parameters, because they control the operation of the algorithm rather than being the weights the training determines. The most important hyperparameter is often the learning rate, which determines the step size used when finding the next set of weights to try when optimizing. If the learning rate is too high, the gradient descent may quickly converge on a plateau or suboptimal point. If the learning rate is too low, the gradient descent may stall and never completely converge. Many other common hyperparameters depend on the algorithms used. Most algorithms have stopping parameters, such as the maximum number of epochs, or the maximum time to run, or the minimum improvement from epoch to epoch. Specific algorithms have hyperparameters that control the shape of their search.
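
The effect of the learning rate can be seen in a minimal gradient-descent sketch. The quadratic objective, rates, and epoch cap below are illustrative choices, not taken from the article:

```python
# A minimal sketch of how the learning-rate hyperparameter steers gradient
# descent. We minimize f(w) = (w - 3)**2, whose gradient is 2 * (w - 3);
# the optimum is w = 3.

def gradient_descent(lr: float, start: float = 0.0, epochs: int = 50) -> float:
    """Run plain gradient descent; `epochs` is a stopping hyperparameter."""
    w = start
    for _ in range(epochs):
        grad = 2 * (w - 3)  # derivative of (w - 3)**2
        w -= lr * grad      # the learning rate scales each step
    return w

print(gradient_descent(lr=0.1))    # converges close to the optimum w = 3
print(gradient_descent(lr=0.001))  # too low: barely moves in 50 epochs
print(gradient_descent(lr=1.1))    # too high: the iterates diverge
```

Note how `epochs` plays the role of a stopping parameter while `lr` controls the step size; on real models both are tuned rather than hard-coded.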


How to Build a Cyber-Resilient Company From Day One

Despite your best proactive measures, some cyber threats will infiltrate your defenses. Reactive defenses, such as firewalls and antivirus software, help to minimize damage when these incidents occur. Firewalls monitor and control incoming and outgoing network traffic based on predetermined security rules, forming the first line of defense against cyber threats. Antivirus software complements firewalls by detecting, preventing and removing malicious software. Intrusion Detection and Prevention Systems (IDS/IPS) monitor your network for suspicious activities and potential threats, alerting you to a potential attack and, in some cases, taking action to mitigate the threat. Encryption is another valuable reactive measure that involves making your sensitive data unreadable to anyone without the appropriate decryption key, thus protecting it even if it falls into the wrong hands. Security Information and Event Management (SIEM) systems provide real-time analysis and reporting of security alerts generated by applications and network hardware. They help detect incidents early and respond promptly.
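
The monitoring idea behind IDS and SIEM tooling can be sketched in miniature: scan an event stream for a suspicious pattern and raise a flag. The log format and threshold below are illustrative assumptions, not from any real product:

```python
from collections import Counter

# Toy sketch of the detection idea behind IDS/SIEM tooling: count
# repeated failed logins per source address and flag heavy offenders.
# The log lines and threshold are invented for illustration.

FAILED_LOGIN_THRESHOLD = 3

log_lines = [
    "203.0.113.9 LOGIN_FAILED alice",
    "203.0.113.9 LOGIN_FAILED alice",
    "198.51.100.4 LOGIN_OK bob",
    "203.0.113.9 LOGIN_FAILED alice",
    "203.0.113.9 LOGIN_FAILED root",
]

def suspicious_sources(lines: list[str], threshold: int = FAILED_LOGIN_THRESHOLD) -> list[str]:
    """Return source IPs whose failed-login count meets the threshold."""
    failures = Counter(
        line.split()[0] for line in lines if "LOGIN_FAILED" in line
    )
    return [ip for ip, count in failures.items() if count >= threshold]

print(suspicious_sources(log_lines))  # ['203.0.113.9']
```

Production systems add correlation across many event sources and automated responses, but the core loop of rule plus threshold plus alert is the same.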


Quantum Algorithms vs. Quantum-Inspired Algorithms

Quantum-inspired algorithms usually refer to one of two things: (i) classical algorithms based on linear algebra methods (often known as tensor networks) that were developed in the recent past, or (ii) methods that use a classical computer to simulate the behavior of a quantum computer, so that the classical machine runs algorithms exploiting the same laws of quantum mechanics that benefit real quantum computers. On (i), while the physics community has leveraged these methods to address problems in quantum mechanics since the 1970s [Penrose], tensor networks have an independent origin in neuroscience as far back as the 1980s, because there is nothing really quantum behind them; it really is just linear algebra. For (ii), emulating a quantum system runs up against the limitations of classical hardware. It is very hard to classically emulate the full dynamics of a large quantum system, for the exact same reasons that one wants to actually build a real one! So, does this mean that quantum-inspired algorithms are bogus? Not really.
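
Point (ii) can be made concrete with a toy statevector simulator: a classical emulator must track 2**n complex amplitudes for an n-qubit register, which is exactly where classical hardware hits its limits. This pure-Python sketch (illustrative only) applies a Hadamard gate to every qubit of the all-zeros state:

```python
import math

# Classical emulation in miniature: n qubits require a statevector of
# 2**n amplitudes. Applying a Hadamard to each qubit of |00...0> yields
# the uniform superposition, amplitude 1/sqrt(2**n) everywhere.

def hadamard_all(n: int) -> list[float]:
    """Statevector after a Hadamard on each qubit, starting from |00...0>."""
    state = [1.0] + [0.0] * (2**n - 1)  # 2**n amplitudes allocated up front
    h = 1 / math.sqrt(2)
    for qubit in range(n):
        new_state = [0.0] * len(state)
        step = 2**qubit
        for i in range(len(state)):
            if (i & step) == 0:          # pair (i, i+step) mixes under H
                a, b = state[i], state[i + step]
                new_state[i] = h * (a + b)
                new_state[i + step] = h * (a - b)
        state = new_state
    return state

state = hadamard_all(10)
print(len(state))  # 1024 amplitudes for just 10 qubits
```

Each extra qubit doubles the memory and work, which is why full emulation of large quantum systems is out of reach for classical machines.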


Operator survey: 5G services require network automation

"Private 5G" and "network slicing" rank second and third, respectively. Heavy Reading expects their importance and popularity to increase as additional operators deploy 5G SA and can support full autonomy. "Performance SLAs for enterprise services" is currently the lowest ranking (fifth) of all service choices but is likely to be a valuable market, especially for network slicing and private 5G. "Connected devices (e.g., cars, watches, other IoT devices)" ranks just above performance SLAs in fourth. Internet of Things (IoT) is a sizeable market within 4G, but the massive machine-type communications (mMTC) use case has yet to be realized in 5G, as technologies such as RedCap remain underdeveloped. Smaller operators have a different opinion from larger operators on the revenue growth question. For mobile operators with fewer than 9 million subscribers, private 5G ranks first. This result perhaps indicates that smaller operators feel they are already exploiting eMBB services and see little scope for further revenue growth with SA.


Top 5 Features your ITSM Solution Should Have

Addressing the root causes of recurring incidents and preventing them from happening again is the core of what a problem management module is designed for. Robust problem management functionality helps investigate, analyze, and identify underlying causes, leading to effective problem resolution. A reliable ITSM solution should include features such as root cause analysis, trend identification, and proactive problem identification. This should provide a structured approach to change requests, reduce the impact of incidents, and improve the overall stability of your IT environment. A comprehensive knowledge management system is a necessary asset for any IT service desk. It serves as a centralized repository of information, providing users with self-help resources, troubleshooting guides, and best practices from within the organization. A well-organized and searchable knowledge base allows users to access relevant articles and documentation for independent issue resolution. Knowledge bases reduce reliance on IT support and enable faster problem resolution. When choosing an ITSM solution with a knowledge base, look for user-friendly interfaces, easy personalization, and collaborative features.
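
At its core, the "searchable knowledge base" described above reduces to ranking articles against a user's query. A minimal sketch, with invented articles and a simple match-count heuristic standing in for a real search engine:

```python
# Toy knowledge-base search: rank articles by how many query terms they
# contain so users can self-serve. Articles and the heuristic are
# illustrative assumptions, not from any ITSM product.

knowledge_base = [
    {"title": "Reset your VPN password", "body": "vpn password reset steps"},
    {"title": "Printer offline fix", "body": "printer offline driver restart"},
    {"title": "VPN connection drops", "body": "vpn disconnect wifi driver"},
]

def search(query: str, articles=knowledge_base) -> list[str]:
    """Return article titles ranked by number of matching query terms."""
    terms = query.lower().split()

    def hits(article) -> int:
        text = (article["title"] + " " + article["body"]).lower()
        return sum(term in text for term in terms)

    ranked = sorted(articles, key=hits, reverse=True)
    return [a["title"] for a in ranked if hits(a) > 0]

print(search("vpn driver"))  # "VPN connection drops" ranks first
```

Real knowledge bases layer on stemming, synonyms, and usage analytics, but even this toy version shows why a well-organized, searchable repository deflects tickets from the service desk.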


No cyber resilience without open source sustainability

Open source sustainability is a problem: maintainers of popular software projects are often overwhelmed by issues and pull requests to the point of burnout. Donations have emerged as one solution, and are regularly provided by governments, foundations, companies, and individuals. Yet, as excerpts of recent drafts of the CRA indicate, it could threaten to undermine sustainability by introducing a burdensome compliance regime and potential penalties if a maintainer decides to accept donations. The result will be fewer resources flowing to already under-resourced maintainers. Open source projects are often multi-stakeholder: they receive contributions from developers building as individuals, volunteering in foundations, and working for companies, large and small. The current text would regulate open source projects unless they have “a fully decentralised development model.” Any project where a corporate employee has commit rights would need to comply with CRA obligations. This turns the win-wins of open source on its head. Projects may ban maintainers or even contributors from companies, and companies may ban their employees from contributing to open source at all.


Building Trust in a Trustless World: Decentralized Applications Unveiled

In a DApp, smart contracts are used to store the program code and state of the application. They replace the traditional server-side component in a regular application. However, there are some important differences to consider. Computation in smart contracts can be costly, so it's crucial to keep it minimal. It's also essential to identify which parts of the application require a trusted and decentralized execution platform. With Ethereum smart contracts, you can create architectures where multiple smart contracts interact with each other, exchanging data and updating their own variables. The complexity is limited only by the block gas limit. Once you deploy your smart contract, other developers may use your business logic in the future. There are two key considerations when designing smart contract architecture. First, once a smart contract is deployed, its code cannot be changed in any way; the only exception is complete removal, which is possible only if the contract was programmed with an accessible SELFDESTRUCT opcode.


How the upcoming Cyber Resilience Act will impact privacy

The Cyber Resilience Act has several positive implications for privacy. First, by enforcing strict standards of cybersecurity in the development and production of new devices, the Act creates an ecosystem where security is ingrained in the product development cycle. Second, by creating reporting obligations, the Act ensures that vulnerabilities are addressed promptly, reducing the risk of personal data breaches and protecting the privacy of individuals. Third, the Act empowers consumers by ensuring they are informed about the vulnerabilities in their devices and the measures they can take to protect their personal data. From the perspective of data controllers, particularly those who serve as manufacturers of devices regulated by the Act, compliance requirements are raised to an even higher threshold. ... Additionally, they will have to comply with reporting obligations regarding vulnerabilities, even those that have already been fixed, regardless of whether personal data was affected or not. Neglecting to fix known vulnerabilities may also result in reputational consequences for data controllers.


Crafting a cybersecurity resilience strategy: A comprehensive IT roadmap

In recent years, there has been a significant increase in the demand for cybersecurity professionals due to the growing importance of protecting sensitive information and systems from cyber threats. Organizations are allocating larger budgets to enhance their cybersecurity measures, resulting in a surge in the number of job opportunities in this field. According to the latest Cyber Security Report by Michael Page, companies are actively seeking skilled cybersecurity talent to address their security challenges. The report reveals that globally, more than 3.5 million cybersecurity jobs are expected to remain unfilled in 2023 due to a shortage of qualified professionals. This shortage has created a sense of desperation among companies, as they struggle to find suitable candidates to fill these critical roles. India is projected to have over 1.5 million vacant cybersecurity positions by 2025, underscoring the immense potential for career growth in this field. To effectively address the ever-changing risks of digitalization and increasing cyberthreats, it is crucial for organizations to implement a continuous security program. 


The rise of OT cybersecurity threats

There is a need for a separate security program for OT that includes different tools, governance, and processes. Companies can’t simply extend their IT security program to OT, as the differences between the two domains are too great. It may require two security operation centers (SOCs), which adds to the complexity and costs of cybersecurity management. Bellack explains that some CEOs or CIOs underestimate the risks associated with an OT attack. “It’s a relatively new set of risks and a lot of executives don’t understand that they are indeed in danger,” Bellack says. “Companies build smarter, faster, cheaper factories using digital technologies because it’s great for business. But it also expands their attack surface, and many people in charge don’t realize the impacts or what they need to do to protect themselves.” ... “Machines are components in a complex, revenue producing infrastructure that is a mix of physical, digital, and human elements. Safety and availability are the key focus, and security is sometimes forced to take a back seat if either of those may be compromised,” explains Boals.



Quote for the day:

"Practice isn't the thing you do once you're good. It's the thing you do that makes you good." -- Malcolm Gladwell

Daily Tech Digest - July 14, 2023

AI and privacy: safeguarding your personal information in an age of intelligent systems

AI models, including chatbots and generative AI, rely on vast quantities of training data. The more data an AI system can access, the more accurate its models should be. The problem is that there are few, if any, controls over how data is captured and used in these training models. With some AI tools connecting directly to the public internet, that could easily include your data. Then there is the question of what happens to queries from generative AI tools. Each service has its own policy for how it collects and stores personal data, as well as how it stores query results. Anyone who uses a public AI service needs to be very careful about sharing either personal information or sensitive business data. New laws will control the use of AI; the European Union, for example, plans to introduce its AI Act by the end of 2023. And individuals are, to an extent, protected from the misuse of their data by the GDPR and other privacy legislation. But security professionals need to take special care of their confidential information.


Are LLMs Leading DevOps Into a Tech Debt Trap?

It depends on how we use the expertise in the models. Instead of asking it to generate new code, we could ask it to interpret and modify existing code. For the first time, we have tools to take down the “not invented here” barriers we’ve created because of the high cognitive load of understanding code. If we can help people work more effectively with existing code, then we can actually converge and reuse our systems. By helping us expand and operate within our working systems base, LLMs could actually help us maintain less code. Imagine if the teams in your organization were invested in collaborating around shared systems! We haven’t done this well today because it takes significantly more time and effort. Today, LLMs have thrown out those calculations. Taking this just one more step, we can see how improved reuse paves the way for reduction of the number of architectural patterns. If we improve our collaboration and investment in sharing code, then there is increased ROI in making shared patterns and platforms work. I see that as a tremendous opportunity for LLMs to improve operations in a meaningful way.


EU-US Data Transfer Framework will be overturned within five years, says expert

The European Commission has adopted the adequacy decision for the EU-US Data Privacy Framework after years of talks, but experts have indicated it will struggle to uphold it in court. In its decision announced on 10 July, the Commission found that the US upholds a level of protection comparable to that of the EU when it comes to the transfer of personal data. Companies that comply with the extensive requirements of the framework can access a streamlined path for transferring data from the EU to the US without the need for extra data protection measures. The framework is likely to face legal action and be overturned, according to Nader Henein, research VP of privacy and data protection at Gartner. “It takes one step closer to what the European Court of Justice needs, but it takes one where the Court of Justice needs it to take five, or ten steps,” Henein told ITPro. “Maximilian Schrems already said he was going to do it, and if not him someone else will like the EFF or multiple privacy groups. What we’re telling our clients is two to five years, depending on who raises the request, when they raise it, and who they use.”


What Does the Patchless Cisco Vulnerability Mean for IT Teams, CIOs?

The lack of patch and workaround for the vulnerability is not typical, and it likely indicates a complex issue, according to Guenther. “It signifies that the vulnerability may be deeply rooted in the design or implementation of the affected feature,” she says. With no workarounds or forthcoming patch, what can IT teams do in response to this vulnerability? Before taking a specific action, IT teams need to consider whether this vulnerability impacts their organization. “I have seen companies go into a panic, only to find out that a particular issue didn’t really affect them,” says Alan Brill, senior managing director in the Kroll Cyber Risk Practice and fellow of the Kroll Institute, a risk and financial advisory solutions company. When determining potential impact, it is important for IT teams to take a broad view. The vulnerability may not directly impact an organization, but what about its supply chain? Third-party risk is an important consideration. If an IT team determines that the vulnerability does impact their organization, what is the risk level? How likely is threat actor exploitation?


Internet has Become An AI Dumping Ground, No Solution in Sight

Having realized the potential of generative AI models like GPT, people have gone a step further and started filling websites with AI-generated junk to attract paying advertisers, according to a report from the media research organisation NewsGuard. The companies behind the models generating this content have been vocal about the measures they are taking to deal with the issue, but no concrete plan has yet been executed. According to the report, more than 140 major brands are currently paying for advertisements that end up on unreliable AI-written sites, likely without their knowledge. The report further clarifies that the websites in question are presented in a way that a reader could assume they were produced by human writers, because the sites have generic layouts and content typical of news websites. Furthermore, these websites do not clearly disclose that their content is AI-produced. Hence, it is high time authorities stepped in and took charge of monitoring not only false content but also content that is not human-generated.


Train AI models with your own data to mitigate risks

To be successful in their generative AI deployments, organizations should fine-tune the AI model with their own data, Klein said. Companies that make the effort to do this properly will move forward faster with their implementations. Using generative AI on its own will prove more compelling if it is embedded within an organization's data strategy and platform, he added. Depending on the use case, a common challenge companies face is whether they have enough data of their own to train the AI model, he said. He noted, however, that data quantity does not necessarily equate to data quality. Data annotation is also important, as is applying context to AI training models so the system churns out responses that are more specific to the industry the business is in, he said. With data annotation, individual components of the training data are labeled to enable AI machines to understand what the data contains and which components are important. Klein pointed to a common misconception that all AI systems are the same, which is not the case.
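To make the annotation idea concrete, here is a minimal sketch of how annotated, domain-specific records might be turned into fine-tuning examples. The function names, label set, and the industry-context prefix are all illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical sketch: labeling components of training data and adding
# industry context before fine-tuning. All names here are illustrative.

def annotate(text, spans):
    """Attach labeled spans (start, end, label) to a raw training text."""
    return {
        "text": text,
        "annotations": [
            {"span": text[s:e], "start": s, "end": e, "label": lbl}
            for s, e, lbl in spans
        ],
    }

def to_finetune_example(record, industry="telecom"):
    """Turn an annotated record into a prompt/completion pair,
    injecting industry context so responses stay domain-specific."""
    labels = ", ".join(f"{a['label']}={a['span']}" for a in record["annotations"])
    return {
        "prompt": f"[{industry}] {record['text']}",
        "completion": f"Entities: {labels}",
    }

record = annotate("Router model X100 drops 5G signal",
                  [(13, 17, "PRODUCT"), (24, 26, "NETWORK")])
example = to_finetune_example(record)
```

The point of the labels is exactly what Klein describes: telling the model which components of each record matter, so quality does not depend on volume alone.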


DevOps Has Won, Long Live the Platform Engineer

A decade ago, DevOps was a cultural phenomenon, with developers and operations coming together and forming a joint alliance to break through silos. Fast forward to today and we’ve seen DevOps further formalized with the emergence of platform engineering. Under the platform-engineering umbrella, DevOps now has a budget, a team and a set of self-service tools so developers can manage operations more directly. The platform engineering team provides benefits that can make Kubernetes a self-service tool, enhancing efficiency and speed of development for hundreds of users. It’s another sign of the maturity and ubiquity of Kubernetes. ... When a technology becomes ubiquitous, it starts to become more invisible. Think about semiconductors, for example. They are everywhere. They’ve advanced from micrometers to nanometers, from five nanometers down to three. We use them in our remote controls, phones and cars, but the chips are invisible and as end users, we just don’t think about them.
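The self-service idea above can be sketched in a few lines: a hypothetical helper a platform team might expose so developers get a ready-made Kubernetes namespace with sane guardrails instead of hand-writing YAML. The quota defaults and labels are illustrative assumptions, not a real platform's API.

```python
# Hypothetical platform-engineering self-service helper: renders the
# Kubernetes objects (Namespace + ResourceQuota) a team needs, so
# Kubernetes stays "invisible" to the developers requesting it.
# Default quota values and label names are illustrative assumptions.

def team_namespace(team, cpu="4", memory="8Gi"):
    namespace = {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": f"team-{team}", "labels": {"owner": team}},
    }
    quota = {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": "default-quota", "namespace": f"team-{team}"},
        "spec": {"hard": {"requests.cpu": cpu, "requests.memory": memory}},
    }
    return [namespace, quota]

manifests = team_namespace("payments")
```

In practice such a helper would sit behind a portal or CLI, which is what lets hundreds of users treat Kubernetes as a self-service tool.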


How Google Keeps Company Data Safe While Using Generative AI Chatbots

“We approach AI both boldly and responsibly, recognizing that all customers have the right to complete control over how their data is used,” Google Cloud’s Vice President of Engineering Behshad Behzadi told TechRepublic in an email. Google Cloud makes three generative AI products: the contact center tool CCAI Platform, the Generative AI App Builder and the Vertex AI portfolio, which is a suite of tools for deploying and building machine learning models. Behzadi pointed out that Google Cloud works to make sure its AI products’ “responses are grounded in factuality and aligned to company brand, and that generative AI is tightly integrated into existing business logic, data management and entitlements regimes.” ... In late June 2023, Google announced a competition for something a bit different: machine unlearning, or making sure sensitive data can be removed from AI training sets to comply with global data regulation standards such as the GDPR. 
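Machine unlearning in its most naive form means dropping the sensitive records and retraining from scratch; the research competition Google announced is about doing better than that. The toy "model" below (a word-frequency counter standing in for a trained system) is purely illustrative of the baseline idea.

```python
from collections import Counter

# Illustrative sketch of the naive baseline for machine unlearning:
# remove the records to be forgotten, then retrain from scratch.
# Real unlearning research aims to avoid this full retrain.

def train(records):
    """Toy 'model': word frequencies over the training corpus."""
    model = Counter()
    for text in records:
        model.update(text.lower().split())
    return model

def unlearn(records, to_forget):
    """Drop the sensitive records, then retrain on what remains."""
    kept = [r for r in records if r not in to_forget]
    return train(kept)

corpus = ["alice lives in paris", "bob likes cheese", "alice likes cheese"]
model = train(corpus)
forgotten = unlearn(corpus, {"alice lives in paris"})
# "paris" no longer appears in the retrained model's vocabulary
```

Because full retrains are prohibitively expensive for large models, the open question is how to get this guarantee without starting over, which is what makes GDPR-style deletion requests hard for AI systems.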


Understanding the Benefits of Computational Storage

The Storage Networking Industry Association (SNIA) defines computational storage as “architectures that provide computational storage functions (CSFs) coupled to storage, offloading host processing or reducing data movement.” The advantage of computational storage over traditional storage, LaChapelle notes, is that it pushes the computational work of handling data queries and processing closer to the data, thereby reducing network traffic and offloading work from compute CPUs. There are two general categories of computational storage: fixed computational storage services (FCSS) and programmable computational storage services (PCSS). “FCSS are optimized for specific, computationally intensive tasks such as inline compression or encryption at the drive,” LaChapelle says. ... There are several different approaches to computational storage, such as the integration of processing power into individual drives (in-situ processing), and accelerators that sit on the storage bus at the storage controller, not in the drives themselves.
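A toy model makes the data-movement argument concrete: filtering host-side means every record crosses the storage bus, while a drive-side computational storage function ships only the matches. Record counts and sizes below are made-up numbers for illustration.

```python
# Toy model of why computational storage reduces data movement:
# a host-side filter moves every record over the bus, while a
# drive-side CSF filters in situ and moves only the matches.
# All record counts and sizes are illustrative assumptions.

RECORDS = [{"id": i, "hot": i % 100 == 0, "size": 4096} for i in range(10_000)]

def host_side_filter(records):
    """Traditional path: ship everything to the host, filter there."""
    moved = sum(r["size"] for r in records)   # full scan crosses the bus
    matches = [r for r in records if r["hot"]]
    return matches, moved

def drive_side_filter(records):
    """CSF path: filter in situ, ship only the results."""
    matches = [r for r in records if r["hot"]]
    moved = sum(r["size"] for r in matches)   # only matches cross the bus
    return matches, moved

_, host_bytes = host_side_filter(RECORDS)
_, csf_bytes = drive_side_filter(RECORDS)
# Same query result, but the CSF path moves 100x fewer bytes here.
```

With a 1-in-100 hit rate, the in-situ path moves two orders of magnitude less data for the same answer, which is the whole appeal of both the in-drive and controller-side accelerator approaches.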


Sustainable IT: A crisis needing leadership and change

IT leaders play a crucial role in spearheading sustainability initiatives within their organizations, yet according to the non-profit SustainableIT.org, one in four IT organizations is not supporting any ESG mandates. Why is this? Implementation challenges could present a roadblock. A lack of standards for evaluating a company’s carbon footprint also presents challenges. In fact, 50% of firms surveyed in the Capgemini report say they have an enterprise-wide sustainability strategy, but only 18% have a strategy with defined goals and target timelines. ... This is where IT leadership needs to step up. IT leaders have the right relationships and are best positioned to pioneer and champion this change. These leaders have the power to ask the right questions, initiate process changes, and implement strategies that foster a more environmentally friendly business environment. For instance, IT leaders can improve employee awareness of sustainability and can streamline data processes to optimize efficiency and reduce electricity consumption.



Quote for the day:

"A good leader can't get too far ahead of his followers" -- Franklin D. Roosevelt