Daily Tech Digest - July 17, 2023

EU urged to prepare for quantum cyberattacks with coordinated action plan

The narrow focus at the EU level on how to mitigate short-term quantum cybersecurity challenges, especially harvest attacks and quantum attacks on encryption, leaves member states as the frontline actors in the quantum transition, Rodríguez said. "As of 2023, only a few EU countries have made public plans to counter emerging quantum cybersecurity threats, and fewer have put in place strategies to mitigate them, as in the case of Germany." As quantum computers develop, European action will be needed to prevent cybersecurity loopholes that can be used as attack vectors and ensure that all member states are equally resilient to quantum cyberattacks. "A Coordinated Action Plan on the quantum transition is urgently needed that outlines clear goals and timeframes and monitors the implementation of national migration plans to post-quantum encryption," Rodríguez claimed. Such a plan would bridge the gap between the far-looking objective of establishing a fully operational European Quantum Communication Infrastructure (EuroQCI) network and the current needs of the European cybersecurity landscape to respond to short-term quantum cybersecurity threats.


What the CIO role will look like in 2026

“The CIO role in 2026 will be about influencing, leading, and governing, as opposed to technology selector, integrator, configurator, and customizer. And CIOs who are not on top of this before 2026 will find themselves having to catch up,” says Joseph Bruhin, CIO of Breakthru Beverage Group. In other words, CIOs three years out will be even farther away from the technical chief of yesteryear and closer to corporate strategist. “With every company being digital, CIOs will take on the role of the architect of the company, not just the architect of digital,” says Vipin Gupta, former chief information, strategy and digital officer at Toyota Financial Services International and the 2021 MIT Sloan CIO Leadership Award Winner. IT leaders describe the CIO of 2026 and beyond as an “influencer,” “strategic thinker,” and “eloquent communicator and leader.” They say the CIO will need to be flexible, innovative, and nimble. And they stress the need for CIOs to be even more visionary than they are today, because they’ll have a lead role in shaping the organization’s future, not just supporting it.


Breach Roundup: IT Worker Sentenced for Impersonation

Assigned to the investigation, Liles, an IT staff member at Oxford Biomedica, decided to manipulate the situation for personal gain. Instead of directing the ransom payment to the genuine hackers, he secretly altered the original ransom demand. Using the email account of an Oxford Biomedica board member, Liles redirected the funds to a bitcoin wallet under his control. Consequently, if the company chose to pay the ransom, the money would end up in Liles' hands rather than with the actual attackers. Liles also created an email address strikingly similar to that of the original hacker and began pressuring the employer to pay a 300,000-pound ransom. Specialists from the South East Regional Organized Crime Unit's Cyber Crime Unit became suspicious during their investigation. They identified unauthorized access to the board member's email and traced it back to Liles' home address. The charges brought against Liles included blackmail and unauthorized access to a computer with intent to commit other offenses. The court's decision is a reminder of the severe consequences that individuals who exploit their positions for personal gain may face.


Quantum Leaps: Interest and Investment in Quantum Computing

The era of quantum computing has only just begun. The pace of innovation in this nascent space is simply remarkable, experts say, especially as companies and governments around the world increase both their interest and investment in the technology. While the people working in quantum computing (QC) believe it will transform the future of computing, no one knows for sure exactly how or when, because there is simply not enough known about what today’s quantum computers can actually do. And despite its promise, quantum currently has limited applications, and only a handful of these applications are moving past research into real-life scenarios. However, with all the investment and startup activity in the quantum space, it’s safe to assume that it will reshape computing, and it may do so sooner than expected. Alan Baratz, CEO of D-Wave, points to a study from Hyperion Research, which found that more than 80% of responding companies plan to increase quantum commitments in the next 2-3 years, and one-third of those companies say they will spend more than $15 million annually on quantum computing efforts.


The biggest barrier to AI productivity is people

Most people already struggle to find the information they need, which is what led to Google’s massive search business. Within the enterprise, Roth says, roughly one-third of respondents to the 2022 Gartner Digital Worker Survey reported that they frequently struggle to find the information they need to do their jobs well. Perhaps worse, 22% have missed important updates because of the sheer volume of applications and information thrown at them. This is the state of workers in the pre-GenAI world. “Now throw in more content being produced at a quicker pace,” Roth says, “Emails that used to be short and to the point may now be inflated to full, polite corporatespeak by the AI.” A bad problem becomes dramatically worse as more people create more content of middling quality, trusting the AI to get the facts correct. And it often won’t; things like ChatGPT aren’t interested in truth—that’s not what they’re for or how they’re engineered. The solution to this machine-generated problem is to reinsert people into the mix. People are still needed to fact-check and do quality control. 


Unconventional Recruiting Methods That Can Help Fill The Tech Talent Gap

Partnering with local schools and nonprofit organizations can help build talent pipelines. Providing learning opportunities for students of all ages—from elementary school through college—by exposing them to various technology disciplines can generate interest and encourage them to consider professions in the field. Teaching and mentoring the next generation are crucial for employers who want to grow future talent pools organically. Speaking at schools and nonprofit organizations allows you to meet and handpick potential employees rather than simply waiting for responses to job postings. ... Another solution for expanding talent pools is creating entry-level “bench” or “evergreen” positions that allow individuals to expand their strengths and work experience by rotating through different IT disciplines. The positions are general and designed to get talented individuals into an organization with the idea that they’ll move into more permanent roles as the right fits become available.


Panic about overhyped AI risk could lead to the wrong kind of regulation

The demand for AI stories has created a perfect storm for misinformation, as self-styled experts peddle exaggerations and fabrications that perpetuate sloppy thinking and flawed metaphors. Tabloid-style reporting on AI only serves to fan the flames of hysteria further. These types of common exaggerations ultimately detract from effective policymaking aimed at addressing both immediate risks and potential catastrophic threats posed by certain AI technologies. For instance, one of us was able to trick ChatGPT into giving precise instructions on how to build explosives made out of fertilizer and diesel fuel, as well as how to adapt that combination into a dirty bomb using radiological materials. If machine learning were merely an academic curiosity, we could shrug this off. But as its potential applications extend into government, education, medicine, and national defense, it’s vital that we all push back against hype-driven narratives and put our weight behind sober scrutiny.


Want to make cybersecurity much stronger? Become a mentor

Those who have been around the world of cybersecurity for a while have long realized the importance of the chief information security officer's (CISO) role in leading teams charged with maintaining the security of corporate data and much, much more. But both freshly minted and veteran CISOs can sometimes feel they're stranded on a desert island for several reasons. They may be new to the role and acclimating to the responsibility and, of course, the accountability they are now shouldering. Others may find themselves having to rapidly garner knowledge and perspective when a situation about which they lack familiarity lands on their plate. This is where mentors and mentorship can be invaluable. So, I set out to determine what that looks like today and how accessible CISOs are to one another. ... "Mentorship in the cybersecurity field is an invaluable tool in both an individual's and an organization's maturity. CISOs who have been through the wringer have considerable wisdom to share about everything from ransomware remediation to dealing with recalcitrant CFOs," shared Craig Burland, CISO of Inversion6. 


Tales from Production: How Real-World Coders Are Using AI

Some programmers on Hacker News were using AI tools for debugging — and even “rubber duck” debugging, where describing a code’s function (and its bugs) sometimes produces crucial insights into problems. “I’ve found rubber duck debugging to be an exceptionally effective use case for ChatGPT,” one developer posted. “Often it will surprise me by pinpointing the solution outright, but I’ll always be making progress by clarifying my own thinking.” But just how good is AI at debugging its own code? One commenter complained that at the end of the day, “Sometimes it’d give completely wrong answers. It’s just not code I’d commit or let pass a code review.” Another doubted AI’s ability to fix those bugs. “They can approximate the syntax of things in their training corpus, but logic? The lights are off and nobody’s home.” But another commenter believes in AI’s potential. “I’ve already had the GPT-3.5-Turbo model walk through and step-by-step isolate and diagnose errors. They 100% can troubleshoot and correct issues in the code..." 


DevOps and Cloud InfoQ Trends Report – July 2023

In the accompanying cloud and DevOps trends podcast discussion, the participants address the state of cloud innovation and DevOps. They agree that cloud innovation has slowed down, moving from "revolution" to "evolution". While large numbers of organizations have adopted cloud technologies, many enterprises still want to migrate and re-architect workloads. As for DevOps, the concept of giving teams the access and autonomy to create business value is still alive, but its implementation has stagnated in some organizations. The panelists mentioned their interest in value stream management to unlock DevOps’s flow and value realization. The public cloud vendors have evolved from their original goal of providing on-demand access to scalable resources to focus more on offering managed services. This evolution has made cloud computing more ubiquitous. However, technology is changing rapidly around existing services, new business requirements are being discovered, and new challenges are emerging.



Quote for the day:

"Leadership is a journey, not a destination. It is a marathon, not a sprint. It is a process, not an outcome. " -- John Donahoe

Daily Tech Digest - July 16, 2023

The engines of AI: Machine learning algorithms explained

Machine learning algorithms train on data to find the best set of weights for each independent variable that affects the predicted value or class. The algorithms themselves have variables, called hyperparameters. They’re called hyperparameters, as opposed to parameters, because they control how the algorithm operates rather than being the weights the algorithm determines. The most important hyperparameter is often the learning rate, which determines the step size used when finding the next set of weights to try when optimizing. If the learning rate is too high, the gradient descent may overshoot the minimum and fail to converge, or bounce between suboptimal points. If the learning rate is too low, the gradient descent may crawl, stall on a plateau, and never completely converge. Many other common hyperparameters depend on the algorithms used. Most algorithms have stopping parameters, such as the maximum number of epochs, the maximum time to run, or the minimum improvement from epoch to epoch. Specific algorithms have hyperparameters that control the shape of their search.
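The effect of the learning rate can be seen in a few lines of code. The quadratic loss below and the specific rates tried are illustrative choices, not drawn from the article:

```python
# Minimal gradient descent on f(w) = (w - 3)^2, whose gradient is
# 2 * (w - 3) and whose minimum sits at w = 3.
def gradient_descent(learning_rate, steps=100, w=0.0):
    for _ in range(steps):
        grad = 2 * (w - 3)          # gradient of the loss at w
        w -= learning_rate * grad   # step size scaled by the learning rate
    return w

good = gradient_descent(0.1)            # moderate rate: converges
slow = gradient_descent(0.0001)         # tiny rate: barely moves
wild = gradient_descent(1.1, steps=20)  # oversized rate: diverges

print(round(good, 4))        # 3.0
print(round(slow, 2))        # 0.06 -- still far from 3 after 100 steps
print(abs(wild - 3) > 100)   # True -- each step overshoots further
```

Note the split the paragraph describes: the update rule inside the loop is the parameter-finding machinery, while the learning rate passed in from outside is the hyperparameter that governs it.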


How to Build a Cyber-Resilient Company From Day One

Despite your best proactive measures, some cyber threats will infiltrate your defenses. Reactive defenses, such as firewalls and antivirus software, help to minimize damage when these incidents occur. Firewalls monitor and control incoming and outgoing network traffic based on predetermined security rules, forming the first line of defense against cyber threats. Antivirus software complements firewalls by detecting, preventing and removing malicious software. Intrusion Detection and Prevention Systems (IDS/IPS) monitor your network for suspicious activities and potential threats, alerting you to a potential attack and, in some cases, taking action to mitigate the threat. Encryption is another valuable reactive measure that involves making your sensitive data unreadable to anyone without the appropriate decryption key, thus protecting it even if it falls into the wrong hands. Security Information and Event Management (SIEM) systems provide real-time analysis and reporting of security alerts generated by applications and network hardware. They help detect incidents early and respond promptly.
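As an illustration of the correlation logic a SIEM applies, here is a minimal sketch; the "EVENT ip" log format, the event names, and the threshold of three failures are assumptions for the example:

```python
# Toy version of a SIEM correlation rule: count failed logins per source
# IP across a stream of events and alert once a threshold is reached.
from collections import Counter

def failed_login_alerts(log_lines, threshold=3):
    failures = Counter()
    for line in log_lines:
        event, _, ip = line.partition(" ")
        if event == "FAILED_LOGIN":
            failures[ip] += 1
    return sorted(ip for ip, count in failures.items() if count >= threshold)

logs = [
    "FAILED_LOGIN 10.0.0.5",
    "FAILED_LOGIN 10.0.0.5",
    "LOGIN_OK 10.0.0.7",
    "FAILED_LOGIN 10.0.0.5",
    "FAILED_LOGIN 192.168.1.9",
]
print(failed_login_alerts(logs))  # ['10.0.0.5']
```

A real SIEM evaluates thousands of such rules over normalized events from many sources, but the principle is the same: aggregate, correlate, and alert early.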


Quantum Algorithms vs. Quantum-Inspired Algorithms

Quantum-inspired algorithms usually refer to either of two things: (i) classical algorithms based on linear algebra methods, often known as tensor networks, that were developed in the recent past; or (ii) methods that attempt to use a classical computer to simulate the behavior of a quantum computer, thus letting the classical machine run algorithms that benefit from the same laws of quantum mechanics that power real quantum computers. On (i), while the physics community has leveraged these methods to address problems in quantum mechanics since the 70s [Penrose], tensor networks have an independent origin as far back as the 80s in neuroscience, as there is nothing really quantum behind them; it really is just linear algebra. For (ii), the process of emulating a quantum system runs into the limitations of classical hardware. It is very hard to classically emulate the full dynamics of a large quantum system for the exact same reasons that one wants to actually build a real one! So, does this mean that quantum-inspired algorithms are bogus? Not really. 
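Point (ii) can be made concrete with simple arithmetic: an n-qubit state vector holds 2**n complex amplitudes, so the memory a faithful classical emulation needs doubles with every qubit added. The 16 bytes per amplitude below (two 64-bit floats) is an assumption for the sketch:

```python
# Why full classical emulation of a quantum computer hits a wall:
# memory for the state vector grows as 2**n.
def statevector_bytes(n_qubits):
    return (2 ** n_qubits) * 16  # one complex amplitude per basis state

print(statevector_bytes(10))  # 16384 bytes: trivial
print(statevector_bytes(30))  # ~17 GB: strains a workstation
print(statevector_bytes(50))  # ~18 PB: beyond any classical machine
```

This exponential wall is exactly the reason one wants real quantum hardware in the first place.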


Operator survey: 5G services require network automation

"Private 5G" and "network slicing" rank second and third, respectively. Heavy Reading expects their importance and popularity to increase as additional operators deploy 5G SA and can support full autonomy. "Performance SLAs for enterprise services" is currently the lowest ranking (fifth) of all service choices but is likely to be a valuable market, especially for network slicing and private 5G. "Connected devices (e.g., cars, watches, other IoT devices)" ranks just above performance SLAs in fourth. Internet of Things (IoT) is a sizeable market within 4G, but the massive machine-type communications (mMTC) use case has yet to be realized in 5G, as technologies such as RedCap remain underdeveloped. Smaller operators have a different opinion from larger operators on the revenue growth question. For mobile operators with fewer than 9 million subscribers, private 5G ranks first. This result perhaps indicates that smaller operators feel they are already exploiting eMBB services and see little scope for further revenue growth with SA.


Top 5 Features your ITSM Solution Should Have

Addressing the root causes of recurring incidents and preventing them from happening again is the core of what a problem management module is designed for. Robust problem management functionality helps investigate, analyze, and identify underlying causes, leading to effective problem resolution. A reliable ITSM solution should include features such as root cause analysis, trend identification, and proactive problem identification. This should provide a structured approach to change requests, reduce the impact of incidents, and improve the overall stability of your IT environment. A comprehensive knowledge management system is a necessary asset for any IT service desk. It serves as a centralized repository of information, providing users with self-help resources, troubleshooting guides, and best practices from within the organization. A well-organized and searchable knowledge base allows users to access relevant articles and documentation for independent issue resolution. Knowledge bases reduce reliance on IT support and enable faster problem resolution. When choosing an ITSM solution with a knowledge base, look for user-friendly interfaces, easy personalization, and collaborative features.


No cyber resilience without open source sustainability

Open source sustainability is a problem: maintainers of popular software projects are often overwhelmed by issues and pull requests to the point of burnout. Donations have emerged as one solution, and are regularly provided by governments, foundations, companies, and individuals. Yet, as excerpts of recent drafts of the CRA indicate, it could undermine sustainability by introducing a burdensome compliance regime and potential penalties if a maintainer decides to accept donations. The result will be fewer resources flowing to already under-resourced maintainers. Open source projects are often multi-stakeholder: they receive contributions from developers building as individuals, volunteering in foundations, and working for companies, large and small. The current text would regulate open source projects unless they have “a fully decentralised development model.” Any project where a corporate employee has commit rights would need to comply with CRA obligations. This turns the win-wins of open source on its head. Projects may ban maintainers or even contributors from companies, and companies may ban their employees from contributing to open source at all. 


Building Trust in a Trustless World: Decentralized Applications Unveiled

In a DApp, smart contracts are used to store the program code and state of the application. They replace the traditional server-side component in a regular application. However, there are some important differences to consider. Computation in smart contracts can be costly, so it's crucial to keep it minimal. It's also essential to identify which parts of the application require a trusted and decentralized execution platform. With Ethereum smart contracts, you can create architectures where multiple smart contracts interact with each other, exchanging data and updating their own variables. The complexity is limited only by the block gas limit. Once you deploy your smart contract, other developers may use your business logic in the future. There are two key considerations when designing smart contract architecture. First, once a smart contract is deployed, its code cannot be changed in any way; the only exception is complete removal, and only if the contract was programmed with an accessible SELFDESTRUCT opcode.


How the upcoming Cyber Resilience Act will impact privacy

The Cyber Resilience Act has several positive implications for privacy. First, by enforcing strict standards of cybersecurity in the development and production of new devices, the Act creates an ecosystem where security is ingrained in the product development cycle. Second, by creating the reporting obligations, the Act ensures that vulnerabilities are addressed promptly, reducing the risk of personal data breaches and protecting the privacy of individuals. Third, the Act empowers consumers by ensuring they are informed about the vulnerabilities in their devices and the measures they can take to protect their personal data. From the perspective of data controllers, particularly those who serve as manufacturers of devices regulated by the Act, compliance requirements are raised to an even higher threshold. ... Additionally, they will have to comply with reporting obligations regarding vulnerabilities, even those that have already been fixed, regardless of whether personal data was affected or not. Neglecting to fix known vulnerabilities may also result in reputational consequences for data controllers.


Crafting a cybersecurity resilience strategy: A comprehensive IT roadmap

In recent years, there has been a significant increase in the demand for cybersecurity professionals due to the growing importance of protecting sensitive information and systems from cyber threats. Organizations are allocating larger budgets to enhance their cybersecurity measures, resulting in a surge in the number of job opportunities in this field. According to the latest Cyber Security Report by Michael Page, companies are actively seeking skilled cybersecurity talent to address their security challenges. The report reveals that globally, more than 3.5 million cybersecurity jobs are expected to remain unfilled in 2023 due to a shortage of qualified professionals. This shortage has created a sense of desperation among companies, as they struggle to find suitable candidates to fill these critical roles. India is projected to have over 1.5 million vacant cybersecurity positions by 2025, underscoring the immense potential for career growth in this field. To effectively address the ever-changing risks of digitalization and increasing cyberthreats, it is crucial for organizations to implement a continuous security program. 


The rise of OT cybersecurity threats

There is a need for a separate security program for OT that includes different tools, governance, and processes. Companies can’t simply extend their IT security program to OT, as the differences between the two domains are too great. It may require two security operations centers (SOCs), which adds to the complexity and costs of cybersecurity management. Bellack explains that some CEOs or CIOs underestimate the risks associated with an OT attack. “It’s a relatively new set of risks and a lot of executives don’t understand that they are indeed in danger,” Bellack says. “Companies build smarter, faster, cheaper factories using digital technologies because it’s great for business. But it also expands their attack surface, and many people in charge don’t realize the impacts or what they need to do to protect themselves.” ... “Machines are components in a complex, revenue producing infrastructure that is a mix of physical, digital, and human elements. Safety and availability are the key focus, and security is sometimes forced to take a back seat if either of those may be compromised,” explains Boals.



Quote for the day:

"Practice isn't the thing you do once you're good. It's the thing you do that makes you good." -- Malcolm Gladwell

Daily Tech Digest - July 14, 2023

AI and privacy: safeguarding your personal information in an age of intelligent systems

AI models, including chatbots and generative AI, rely on vast quantities of training data. The more data an AI system can access, the more accurate its models should be. The problem is that there are few, if any, controls over how data is captured and used in these training models. With some AI tools connecting directly to the public internet, that could easily include your data. Then there is the question of what happens to queries from generative AI tools. Each service has its own policy for how it collects, and stores, personal data, as well as how they store query results. Anyone who uses a public AI service needs to be very careful about sharing either personal information, or sensitive business data. New laws will control the use of AI; the European Union, for example, plans to introduce its AI Act by the end of 2023. And individuals are, to an extent, protected from the misuse of their data by the GDPR and other privacy legislation. But security professionals need to take special care of their confidential information.


Are LLMs Leading DevOps Into a Tech Debt Trap?

It depends on how we use the expertise in the models. Instead of asking it to generate new code, we could ask it to interpret and modify existing code. For the first time, we have tools to take down the “not invented here” barriers we’ve created because of the high cognitive load of understanding code. If we can help people work more effectively with existing code, then we can actually converge and reuse our systems. By helping us expand and operate within our working systems base, LLMs could actually help us maintain less code. Imagine if the teams in your organization were invested in collaborating around shared systems! We haven’t done this well today because it takes significantly more time and effort. Today, LLMs have thrown out those calculations. Taking this just one more step, we can see how improved reuse paves the way for reduction of the number of architectural patterns. If we improve our collaboration and investment in sharing code, then there is increased ROI in making shared patterns and platforms work. I see that as a tremendous opportunity for LLMs to improve operations in a meaningful way.


EU-US Data Transfer Framework will be overturned within five years, says expert

The European Commission has adopted the adequacy decision for the EU-US Data Privacy Framework after years of talks, but experts have indicated it will struggle to uphold it in court. In its decision announced on 10 July, the Commission found that the US upholds a level of protection comparable to that of the EU when it comes to the transfer of personal data. Companies that comply with the extensive requirements of the framework can access a streamlined path for transferring data from the EU to the US without the need for extra data protection measures. The framework is likely to face legal action and be overturned, according to Nader Henein, research VP of privacy and data protection at Gartner. “It takes one step closer to what the European Court of Justice needs, but it takes one where the Court of Justice needs it to take five, or ten steps,” Henein told ITPro. “Maximilian Schrems already said he was going to do it, and if not him someone else will like the EFF or multiple privacy groups. What we’re telling our clients is two to five years, depending on who raises the request, when they raise it, and who they use.”


What Does the Patchless Cisco Vulnerability Mean for IT Teams, CIOs?

The lack of patch and workaround for the vulnerability is not typical, and it likely indicates a complex issue, according to Guenther. “It signifies that the vulnerability may be deeply rooted in the design or implementation of the affected feature,” she says. With no workarounds or forthcoming patch, what can IT teams do in response to this vulnerability? Before taking a specific action, IT teams need to consider whether this vulnerability impacts their organization. “I have seen companies go into a panic, only to find out that a particular issue didn’t really affect them,” says Alan Brill, senior managing director in the Kroll Cyber Risk Practice and fellow of the Kroll Institute, a risk and financial advisory solutions company. When determining potential impact, it is important for IT teams to take a broad view. The vulnerability may not directly impact an organization, but what about its supply chain? Third-party risk is an important consideration. If an IT team determines that the vulnerability does impact their organization, what is the risk level? How likely is threat actor exploitation?


Internet has Become An AI Dumping Ground, No Solution in Sight

After realising the potential of generative AI models like GPT, people have taken a step ahead and started filling websites with junk generated by AI to get the attention of advertisers. This content aims to attract paying advertisers according to a report from the media research organisation NewsGuard. The companies behind the models generating this content have been vocal about the measures they are taking to deal with the issue, but no concrete plan has yet been executed. According to the report, more than 140 major brands are currently paying for advertisements that end up on unreliable AI-written sites, likely without their knowledge. The report further clarifies that the websites in question are presented in a way that a reader could assume they are produced by human writers, because the sites have a generic layout and content typical of news websites. Furthermore, these websites do not clearly disclose that their content is AI-produced. Hence, it is high time authorities step in and take charge of monitoring not just false content but also non-human-generated content.


Train AI models with your own data to mitigate risks

To be successful in their generative AI deployments, organizations should fine-tune the AI model with their own data, Klein said. Companies that make the effort to do this properly will move forward faster with their implementation. Using generative AI on its own will prove more compelling if it is embedded within an organization's data strategy and platform, he added. Depending on the use case, a common challenge companies face is whether they have enough data of their own to train the AI model, he said. He noted, however, that data quantity does not necessarily equate to data quality. Data annotation is also important, as is applying context to AI training models so the system churns out responses that are more specific to the industry the business is in, he said. With data annotation, individual components of the training data are labeled to enable AI machines to understand what the data contains and which components are important. Klein pointed to a common misconception that all AI systems are the same, which is not the case.


DevOps Has Won, Long Live the Platform Engineer

A decade ago, DevOps was a cultural phenomenon, with developers and operations coming together and forming a joint alliance to break through silos. Fast forward to today and we’ve seen DevOps further formalized with the emergence of platform engineering. Under the platform-engineering umbrella, DevOps now has a budget, a team and a set of self-service tools so developers can manage operations more directly. The platform engineering team provides benefits that can make Kubernetes a self-service tool, enhancing efficiency and speed of development for hundreds of users. It’s another sign of the maturity and ubiquity of Kubernetes. ... When a technology becomes ubiquitous, it starts to become more invisible. Think about semiconductors, for example. They are everywhere. They’ve advanced from micrometers to nanometers, from five nanometers down to three. We use them in our remote controls, phones and cars, but the chips are invisible and as end users, we just don’t think about them.


How Google Keeps Company Data Safe While Using Generative AI Chatbots

“We approach AI both boldly and responsibly, recognizing that all customers have the right to complete control over how their data is used,” Google Cloud’s Vice President of Engineering Behshad Behzadi told TechRepublic in an email. Google Cloud makes three generative AI products: the contact center tool CCAI Platform, the Generative AI App Builder and the Vertex AI portfolio, which is a suite of tools for deploying and building machine learning models. Behzadi pointed out that Google Cloud works to make sure its AI products’ “responses are grounded in factuality and aligned to company brand, and that generative AI is tightly integrated into existing business logic, data management and entitlements regimes.” ... In late June 2023, Google announced a competition for something a bit different: machine unlearning, or making sure sensitive data can be removed from AI training sets to comply with global data regulation standards such as the GDPR. 


Understanding the Benefits of Computational Storage

The Storage Networking Industry Association (SNIA) defines computational storage as “architectures that provide computational storage functions (CSFs) coupled to storage, offloading host processing or reducing data movement.” The advantage of computational storage over traditional storage, LaChapelle notes, is that it pushes the computational requirement to handle data queries and processing closer to the data, thereby reducing network traffic and offloading work from compute CPUs. There are two general categories of computational storage: fixed computational storage services (FCSS) and programmable computational storage services (PCSS). “FCSS are optimized for specific, computationally intensive tasks such as inline compression or encryption at the drive,” LaChapelle says. ... There are several different approaches to computational storage, such as the integration of processing power into individual drives (in-situ processing), and accelerators that sit on the storage bus at the storage controller, not in the drives themselves.


Sustainable IT: A crisis needing leadership and change

IT leaders play a crucial role in spearheading sustainability initiatives within their organizations, yet according to the non-profit SustainableIT.org, one in four IT organizations is not supporting any ESG mandates. Why is this? Implementation challenges could present a roadblock. A lack of standards to follow to evaluate a company’s carbon footprint also presents challenges. In fact, 50% of firms surveyed in the Capgemini report say they have an enterprise-wide sustainability strategy, but only 18% have a strategy with defined goals and target timelines. ... This is where IT leadership needs to step up. IT leaders have the right relationships and are best positioned to pioneer and champion this change. These leaders have the power to ask the right questions, initiate process changes, and implement strategies that foster a more environmentally-friendly business environment. For instance, IT leaders can improve employee awareness surrounding sustainability and can streamline data processes to optimize efficiency and reduce electricity consumption.



Quote for the day:

"A good leader can't get too far ahead of his followers" -- Franklin D. Roosevelt

Daily Tech Digest - July 13, 2023

Industry groups call for changes to EU Cyber Resilience Act

The first recommendation made by the collective is that the proposed scope of the CRA should be made narrower and clearer. "Any reference to 'remote data processing solutions' should be excluded from the scope of the CRA to ensure legal clarity, and to avoid overlaps with existing legislation and unnecessary burden," they wrote. Software as a service, platform as a service, or infrastructure as a service should not be considered within the scope of the CRA, and this clarification should be reflected in the core legal text to provide greater legal certainty and to facilitate implementation across the EU, the recommendation read. ... The second recommendation calls for a more proportionate approach to determining a product's risk-level, along with greater certainty for manufacturers to ascertain if a product is deemed a critical one. "A transparent and inclusive review process involving economic operators should be set up to determine whether a product is critical," the groups wrote. This would avoid wrongfully designating too many products as "critical," making them more expensive...


AI’s Impact on Security, Risk and Governance in a Hybrid Cloud World

To build an AI-driven compliance, security and governance solution, you must first be able to scale and learn from large data sets. To learn from the data, you must build training models for the data to be processed effectively by the AI component. These training models require the ability to analyze and operate at scale and support different training models for different use cases. Since we need to analyze and operate at scale continuously, we have moved from the underlying tech of machine learning (ML) to deep learning (DL) based on neural net technology. With this technology, we can detect, analyze and prioritize the findings. The second part of this is auto-remediation; this enables us to understand where the problem is developing and what actions, if taken, would create the biggest impact. This prioritization technique driven by AI and our proprietary technology working together creates a scenario of a self-healing environment. In this environment, a problem is addressed before it becomes a serious issue. 
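The detect–prioritize–remediate loop described above can be sketched as follows. To be clear, the scoring model (severity times blast radius) and the remediation registry are assumptions invented for the sketch; the article's proprietary system would drive these from trained deep learning models rather than hard-coded rules.

```python
# Illustrative detect -> prioritize -> auto-remediate loop for security
# findings. Field names and scores are made up for the example.

findings = [
    {"id": "F1", "issue": "public-bucket", "severity": 9, "blast_radius": 8},
    {"id": "F2", "issue": "stale-key", "severity": 5, "blast_radius": 2},
    {"id": "F3", "issue": "open-port", "severity": 7, "blast_radius": 6},
]

# Known issue classes that can be fixed automatically (the self-healing part).
REMEDIATIONS = {
    "public-bucket": lambda f: f"blocked public access on {f['id']}",
    "open-port": lambda f: f"closed port for {f['id']}",
}

def prioritize(findings):
    # Rank by estimated impact so the biggest problems are addressed first.
    return sorted(findings, key=lambda f: f["severity"] * f["blast_radius"],
                  reverse=True)

def remediate(findings):
    actions = []
    for f in prioritize(findings):
        fix = REMEDIATIONS.get(f["issue"])
        if fix:
            actions.append(fix(f))          # self-heal known issue classes
        else:
            actions.append(f"escalated {f['id']} to an analyst")
    return actions

print(remediate(findings))
```

The prioritization step is what turns raw detections into "what action, if taken, would create the biggest impact," and the remediation registry is what lets a problem be addressed before it becomes a serious issue.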


9 tips for recruiting high-end IT talent

“Create a brand and reputation to attract this kind of talent to the work you do and your company’s culture,” says Drees. “That could be LinkedIn content or articles you post on your company site.” It could be stories in the news about your company or what personnel and clients say about the company in social media. ... “Give people the ability to grow, mature, and evolve,” says Majeed, whose leadership team has spent a great deal of time, thought, and money on this idea, focusing on creating a culture that nurtures and incubates talent, going so far as to build customized learning programs that encourage people to learn new technical skills and to grow their career. “We also give people so much flexibility to do what they want to do,” he says. This might sound like a distraction from work — time consuming, perhaps, or expensive. But it’s effective, he says. “It makes people more productive — they are working with passion and purpose.” ... “Leverage the engineers on your team, who are excited about the challenges they’re solving,” says Drees.


Combatting data governance risks of public generative AI tools

Integration enables users to obtain answers or sentences derived from enterprise data relevant to their queries. While publicly available generative AI tools permit natural language querying, world wide web data is not always applicable to the use case. Knowledge management solutions connect data from various data sources and business applications to consolidate the data into a central knowledge base. When it comes to querying about a customer or details of a business document, this is the only way to retrieve answers based on specific company entities. Additionally, delta crawling (i.e., crawling for new data only) certifies that the model’s data is always up to date, so users aren’t receiving old and obsolete information. ... ChatGPT and other publicly available models, like Google Bard, do not cite where their outputs came from. So, how do you know if the content came from a reliable source versus an opinionated blog or insignificant public forum? Adding the source allows users to open the corresponding document or file and view all the details to confirm accuracy and gain further insight into their query.
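Delta crawling, as described above, can be illustrated with a minimal sketch: keep a content hash per document, and re-ingest only documents that are new or have changed since the last crawl. The function and field names here are illustrative, not from any specific knowledge management product.

```python
# Minimal delta-crawl sketch: only documents whose content hash changed since
# the last run are returned for re-ingestion into the knowledge base.
import hashlib

def delta_crawl(documents: dict[str, str], seen_hashes: dict[str, str]) -> list[str]:
    """Return ids of new or changed documents; update seen_hashes in place."""
    changed = []
    for doc_id, content in documents.items():
        digest = hashlib.sha256(content.encode()).hexdigest()
        if seen_hashes.get(doc_id) != digest:   # new or modified since last crawl
            seen_hashes[doc_id] = digest
            changed.append(doc_id)
    return changed

state: dict[str, str] = {}
print(delta_crawl({"a": "v1", "b": "v1"}, state))  # first run ingests everything
print(delta_crawl({"a": "v1", "b": "v2"}, state))  # second run picks up only "b"
```

Because unchanged documents are skipped, repeated crawls stay cheap while the model's view of the data stays current.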


Civil society groups call on EU to put human rights at centre of AI Act

The groups are therefore calling on the EU institutions to draw clear limits on the use of AI by national security, law enforcement and migration authorities, particularly when it comes to “harmful and discriminatory” surveillance practices. They say these limits must include a full ban on real-time and retrospective “remote biometric identification” technologies in publicly accessible spaces, by all actors and without exception; a prohibition on all forms of predictive policing; a removal of all loopholes and exemptions for law enforcement and migration control; and a full ban on emotion recognition systems. They added the EU should also reject the Council’s attempt to include a blanket exemption for systems developed or deployed for national security purposes; and prohibit the use of AI in migration contexts to make individualised risk assessments, or to otherwise “interdict, curtail and prevent” migration. The groups are also calling for the EU to properly empower members of the public to understand and challenge the use of AI systems.


The Challenges and Rewards of Zero Trust Privacy

A primary challenge that occurs with the implementation of zero trust privacy is the lack of a compliance footprint. A compliance footprint is a list of all the laws, regulations and standards the organization must adhere to. Often, companies do not have a team or individual responsible to monitor changes in the compliance landscape. Failure to do this impacts privacy compliance and the ability to implement zero trust privacy. Organizations cannot guarantee that the system architecture restricts the flow of data beyond that which is legal because they do not know their obligations. We see this today with the increase in privacy fines that have been issued for inappropriate collection and transmission of personal data. Another challenge is that organizations often start with identity and access management. When users’ access and authorization permissions are enabled for an unknown set of data elements, organizations cannot guarantee compliance with least privilege requirements.
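The least-privilege gap described here — grants issued over an unknown set of data elements — can be made concrete with a small sketch. The inventory, role permissions, and function name below are all invented for illustration: every grant is checked against a classified data inventory, and grants over elements missing from that inventory are exactly the ones for which compliance cannot be verified.

```python
# Sketch: audit access grants against a data inventory. Grants referencing
# elements absent from the inventory are the "unknown data" problem -- least
# privilege cannot be verified for them. All names are illustrative.

INVENTORY = {
    "customer_email": "personal",
    "order_total": "internal",
}

ALLOWED = {"support": {"personal", "internal"}, "marketing": {"internal"}}

def audit_grants(grants: list[tuple[str, str]]) -> list[str]:
    issues = []
    for role, element in grants:
        classification = INVENTORY.get(element)
        if classification is None:
            issues.append(f"{role}: '{element}' is not in the data inventory")
        elif classification not in ALLOWED.get(role, set()):
            issues.append(f"{role}: not permitted for {classification} data")
    return issues

print(audit_grants([
    ("support", "customer_email"),    # permitted
    ("marketing", "customer_email"),  # over-privileged
    ("support", "legacy_field"),      # unknown element: unverifiable
]))
```

The second and third grants are the two failure modes the article describes: a grant that exceeds the role's entitlement, and a grant over data the organization has not classified at all.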


Microsoft jumps into competitive security service edge (SSE) arena

Analysts say Microsoft, while late to the market, will be a welcome player in the SSE arena given its large customer base. “Cisco, Palo Alto Networks, Symantec, and Zscaler have a multi-year start over Microsoft. Gaining momentum in a crowded market will take work,” wrote Dell’Oro Group research director Mauricio Sanchez in a blog about the SSE announcement. “Everyone knows who Microsoft is and generally enjoys substantial goodwill among its customer base. A large salesforce and partner ecosystem will open many doors,” Sanchez stated. “Large enterprises that are strong Microsoft shops and take advantage of Microsoft’s Enterprise Licensing Agreement benefits could lead to significant uptake of Microsoft SSE solution.” Also, no other SSE vendor has the same identity vendor chops that Microsoft brings. SSE is identity-heavy, which Microsoft can exploit by owning the identity use cases end-to-end, Sanchez stated. Microsoft Windows and Office 365 clients can preview the SSE software, and it will be generally available for other operating systems later this year.


The obsession advantage in transformation

During tough times, it’s easy to look at customers as a means to an end—a way to drive revenue and help your bottom line. But that’s a terrible approach; your customers are going through the same difficult times, and this is your chance to support them. Obsess about their pain points and learn how you can be there for them. Work from my PwC colleagues has shown that when companies wire a deep understanding of customers into their business models, operations, and decision-making, they not only increase value for customers, but gain insights that help to further differentiate the business. ... The most transformation-ready leaders look to other innovative approaches to gain new perspectives. Whether this is through conversations with executives in different industries, speaking with sports coaches or sociologists, reading and researching relevant case studies, or speaking one-to-one with more junior employees at your own company, gaining a new perspective can often lead to powerful inspiration. Don’t wait for these views to come to you, either.


Building a Data Driven Organization

"The key lies in democratizing data assets and their utilization by providing user-friendly tools, offering literacy courses, and promoting approaches that enable employees across the organization to generate insights," he says. He adds it is not enough for top management to merely include data-driven initiatives in their business strategy -- they must visibly and consistently support the cultural transformation. "This involves actively measuring progress, recognizing early adopters as champions, and rewarding them accordingly," he says. "Holding leaders accountable for driving cultural change in their respective areas is essential." ... The data governance element is also critical, which means establishing goals, measurements, and continuous improvement practices to maximize the value derived from data and ensure user satisfaction. "Set clear objectives for data utilization, monitoring performance against these goals, and consistently refining processes to optimize data-driven practices," he says. By implementing these practices, organizations can foster a data-driven culture where employees are equipped with the necessary tools, skills, and mindset to leverage data effectively in their decision-making processes.


Leap to leader: Make yourself heard

It’s not just a matter of going into a meeting and asking for a raise or promotion. Instead, imagine how an agent or headhunter would represent you. How would they make the case for you getting the job or the raise you deserve? And remember, it’s not just your boss you have to convince; your goal is to give them specifics so that they can go make a case for you to their boss and to HR. Ground the conversation in facts. What have you accomplished? How has your work helped drive the business? Can you point to concrete ways in which you’ve added value? ... There’s a mental loop people can get caught in that might keep them from pushing for more money, whether negotiating for a raise or for a pay package that comes with the new job. “I don’t want to rock the boat,” they say to themselves. “I want to make sure things start on a positive note. I’m grateful for the opportunity.” As a result, they settle too quickly. But for more senior roles, the person on the other side of the table is expecting you to push, and they’ve probably built in some negotiating room for when you do start pushing.



Quote for the day:

"It is not fair to ask of others what you are not willing to do yourself." -- Eleanor Roosevelt

Daily Tech Digest - July 12, 2023

4 collaboration security mistakes companies are still making

If organizations don’t provide access to vetted collaboration tools, employees will likely find their own and use insecure solutions, said Sourya Biswas, technical director, risk management and governance at security consulting firm NCC Group. “Therefore, while it’s important for organizations to embrace digital collaboration, at the same time they should prevent installation and use of unapproved tools, via mechanisms such as restricted local admin access and managed browser solutions.” Even when collaboration tools are vetted and approved, organizations must be cognizant of the different collaboration platforms that each employee is allowed to access in order to prevent sensitive data from being exfiltrated and avoid providing new attack vectors for bad actors, said Michael McCracken, senior director of end user solutions at SHI International, a reseller of technology products and services. In addition, IT needs to maintain central control over these tools, said AJ Yawn, partner, risk assurance advisory at Armanino, an independent accounting and business consulting firm.


EC Says European Private Data Can Flow to Compliant US Companies

The business community had been waiting for guidance on how data privacy policy might look in the EU, says Dona Fraser, senior vice president of privacy initiatives with BBB National Programs, a nonprofit that oversees national, industry self-regulation programs. With the former EU-US Privacy Shield rendered invalid in 2020 by the European Court of Justice, new policy was needed. Fraser says companies wanted to comply and be able to safely conduct business without worry of intervention or whether or not their consumers were being treated properly, but policy was in limbo. The announcement about the new framework seems to have restored confidence in the program. “This week,” she says, “we’ve received an enormous amount of inquiries from current and past participants saying, ‘What's next, what do we do?’ The eagerness that we’re hearing in the marketplace is, for us, from a business perspective, it’s great to hear.” Logistics of the framework and the approval process for businesses still need to be worked out, Fraser says, but now the door is open for companies that halted work with data from Europe to reemerge.


CISO perspective on why boards don’t fully grasp cyber attack risks

A CISO needs to understand the knowledge and background of the board members to be able to translate technical jargon into business language and something familiar to the target audience. I approach this by relating technical jargon to everyday situations or business scenarios, something the board can easily grasp. To be effective at this style of communication, I collaborate with other business leaders outside of the technology groups to optimize business alignment. Focusing on the potential business impact of cybersecurity risk also allows a CISO to frame technical issues in terms of their consequences such as financial loss or damage to the company’s brand. It is equally important to be concise and avoid over-embellishing cyber-risks, while still focusing on the strategic objectives you are asking the board to weigh in on. To bridge the gap between board members and CISOs to promote the mitigation of cyber-risk, it is essential that a CISO enhance communication, educate board members about cybersecurity risks and promote a collaborative approach to decision making.


Data Management at Scale

If your company already has a high level of data management maturity or is decentrally organized, then you can begin with a more decentralized approach to data management. However, to align your decentralized teams, you will need to set standards and principles and make technology choices for shared capabilities. These activities need to happen at a central level and require superb leaders and good architects. I’ll come back to these points toward the end of this chapter, when discussing the role of enterprise architects. Besides the starting point, there are other aspects to take into consideration with regard to centralization and decentralization. First, you should determine your goals for the end of your journey. If your intended end state is a decentralized architecture, but you’ve decided to start centrally, the engineers building the architecture should be aware of this from the beginning. With the longer-term vision in mind, engineers can make capabilities more loosely coupled, allowing for easier decentralization at a later point in time.


Designing High-Performance APIs

By incorporating specific design principles, developers can build APIs that scale effectively and operate efficiently. Here are key considerations for building scalable and efficient APIs:

- Stateless design: Implement a stateless architecture where each API request contains all the necessary information for processing. This design approach eliminates the need for maintaining session state on the server, allowing for easier scalability and improved performance.
- Resource-oriented design: Embrace a resource-oriented design approach that models API endpoints as resources. This design principle provides a consistent and intuitive structure, enabling efficient data access and manipulation.
- Asynchronous operations: Use asynchronous processing for long-running or computationally intensive tasks. By offloading such operations to background processes or queues, the API can remain responsive, preventing delays and improving overall efficiency.
- Horizontal scaling: Design the API to support horizontal scaling, where additional instances of the API can be deployed to handle increased traffic.
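The asynchronous-operations principle can be sketched with Python's asyncio. The names here (`submit_job`, `worker`) are illustrative: the "endpoint" enqueues the work and acknowledges immediately, while a background worker does the slow processing.

```python
# Sketch of offloading long-running work so the API stays responsive:
# submit_job returns right away; the worker completes the job later.
import asyncio

async def worker(queue, results):
    # Background worker: pulls jobs off the queue and does the slow part.
    while True:
        job_id, payload = await queue.get()
        await asyncio.sleep(0.01)          # stand-in for slow processing
        results[job_id] = payload.upper()
        queue.task_done()

async def submit_job(queue, job_id, payload):
    # The "endpoint": enqueue and acknowledge instead of blocking the caller.
    await queue.put((job_id, payload))
    return {"job_id": job_id, "status": "accepted"}

async def main():
    queue, results = asyncio.Queue(), {}
    task = asyncio.create_task(worker(queue, results))
    ack = await submit_job(queue, "42", "hello")
    await queue.join()                     # demo only: wait to show the result
    task.cancel()
    return ack, results

ack, results = asyncio.run(main())
print(ack)            # the immediate acknowledgement the caller sees
print(results["42"])  # completed later in the background
```

In a real service the caller would poll a status endpoint or receive a callback instead of waiting on `queue.join()`; the essential point is that the request path never blocks on the expensive work.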


Why SUSE is forking Red Hat Enterprise Linux

To understand what’s happening here, we need to go back a few years. In late 2020, Red Hat made a crucial change to CentOS Linux (the Community Enterprise Linux Operating System). For the longest time, CentOS was essentially the free (as in beer) version of Red Hat Enterprise Linux (RHEL), Red Hat’s flagship distribution. Red Hat acquired CentOS in 2014 after a lot of turmoil in the CentOS community and gained a permanent majority on the CentOS board. “The CentOS project was in trouble,” Gunnar Hellekson, Red Hat’s VP and GM for Red Hat Enterprise Linux, told me. “At the same time, we needed a way to collaborate with other communities — OpenStack in particular at the time. And we said, well, here’s an opportunity! We can take the CentOS project. Now we have something that is freely available and close enough to RHEL to do the development on — and then that gives us a way to work in the community. And then when customers move into production, they can go on to Red Hat Enterprise Linux.”


The Disconnected State of Enterprise Risk Management

Compliance, with its myriad frameworks, standards and mandates, remains the primary means by which we assess and maintain the risk posture of our national, defense and private sector entities. Compliance is how we gauge our resilience, determine shortcomings and prioritize mitigation efforts to resolve them. Compliance, ostensibly, is how we determine where to point our limited security resources in the form of controls to ensure protection against threats. And yet, while the threats occur in real time, our compliance efforts remain relegated to a historical reporting function, capturing our prior state at best or, worse yet, someone’s subjective opinion of an organization’s security posture. After all, most compliance programs today are best characterized as “opinion farming at scale,” built on surveys or manual assessments of controls by human analysts, who in turn depend on the cooperation and information of countless system owners. No matter how high you stack those opinions, they don’t turn into facts. 


Downsides to using cloud autoscaling systems

Autoscaling can reduce costs by optimizing resource utilization, but savings are not guaranteed. I have seen autoscaling systems lead to unexpected cost increases. For example, rapid and frequent scaling operations can generate additional charges that are often unexpected. This will undoubtedly happen if resources are not managed efficiently. I’ve seen unpredictable workload patterns or sudden spikes in demand trigger autoscaling processes. This results in more instances or resources provisioned, but also a potentially enormous cloud bill. The only way to work around this is to carefully analyze and forecast workload patterns to balance scalability and cost-effectiveness. ... Certain applications don’t work well with autoscaling systems. Legacy or monolithic applications that rely on static configurations or have complex interdependencies may not perform very well with autoscaling systems. Of course, there is a fix, which normally means rewriting portions of the application to leverage autoscaling more efficiently.
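A toy model makes the billing surprise concrete. The hourly rate and demand numbers below are invented for illustration: a spiky workload whose baseline is half that of a steady one still produces a larger bill, because every instance-hour the autoscaler provisions to absorb a spike is billed.

```python
# Toy cost model: an autoscaler matches instance count to demand each hour,
# and every provisioned instance-hour is billed. Rates are made up.

HOURLY_RATE = 0.10  # assumed $/instance-hour

def billed_hours(demand_per_hour: list[int]) -> int:
    """Total instance-hours billed when capacity tracks demand exactly."""
    return sum(demand_per_hour)

steady = [4] * 6                 # constant 4 instances for 6 hours
spiky = [2, 9, 2, 9, 2, 9]       # lower baseline, but spikes force scale-outs

print(f"steady: ${billed_hours(steady) * HOURLY_RATE:.2f}")
print(f"spiky:  ${billed_hours(spiky) * HOURLY_RATE:.2f}")
```

Real clouds add further costs this sketch ignores (per-start billing minimums, data transfer during rebalancing), which is why forecasting workload patterns before enabling aggressive autoscaling matters.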


Defining the CISO Role

CISOs are tasked with the strategic leadership of information security for their companies. This can entail building a cybersecurity program and overseeing the teams that execute the policies that underpin that program. The responsibilities are many and varied. For example, Heins is responsible for incident response, security engineering and operations, identity and access management, cloud and application security, and governance, risk, and compliance. Effectively implementing cybersecurity demands that CISOs spend much of their time engaging with stakeholders throughout an organization: board members, other executives, and people in other departments. They also spend part of their time on external engagement. Meg Anderson, vice president and CISO of investment management and insurance company Principal Financial Group, notes that she talks with her CISO peers about emerging threats and best practices. That part of the job can help CISOs think about how to structure their programs effectively and build a pipeline of talent for the future.


Security First! Strategies for Building Safer Software

Having security involved in the initial stages of a software development process always made sense, as with bug fixing, it is faster and cheaper to address security issues early on. But, particularly in larger enterprises, it was rarely done in practice. By the same token, individual development teams would tend not to invest in security if they saw it as the role of a dedicated security team and thus somebody else’s problem. This pushed security to the right, as one of the things that happened between development and deploying to production, where security becomes more difficult and often less effective. It also led to friction between the development and security teams, since the two groups had conflicting goals: Developers were under pressure to ship more features more quickly, and saw security as a gatekeeper, slowing down or even halting development to allow time to investigate issues. At its most extreme, developers felt, security’s ideal situation would be that nothing would be deployed to production at all — after all, if nothing is running, then nothing can get hacked.



Quote for the day:

"One must be convinced to convince, to have enthusiasm to stimulate the others." -- Stefan Zweig