Daily Tech Digest - July 16, 2023

The engines of AI: Machine learning algorithms explained

Machine learning algorithms train on data to find the best set of weights for each independent variable that affects the predicted value or class. The algorithms themselves have variables, called hyperparameters. They’re called hyperparameters, as opposed to parameters, because they control the operation of the algorithm rather than the weights being determined. The most important hyperparameter is often the learning rate, which determines the step size used when finding the next set of weights to try when optimizing. If the learning rate is too high, gradient descent may overshoot the minimum and oscillate or diverge rather than converge. If the learning rate is too low, gradient descent may progress so slowly that it stalls on a plateau or at a suboptimal point and never completely converges. Many other common hyperparameters depend on the algorithms used. Most algorithms have stopping parameters, such as the maximum number of epochs, the maximum time to run, or the minimum improvement from epoch to epoch. Specific algorithms have hyperparameters that control the shape of their search.
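As a toy illustration of the learning-rate hyperparameter and the stopping parameters mentioned above, here is a minimal gradient-descent sketch; the quadratic loss, step counts, and specific rates are invented for illustration and are not from the article.

```python
# Minimal gradient descent on a 1-D quadratic, showing how the learning-rate
# hyperparameter and two stopping parameters change the algorithm's behavior.

def gradient_descent(grad, w0, learning_rate, max_epochs=100, tol=1e-6):
    """Iterate w <- w - learning_rate * grad(w) until a stopping rule fires."""
    w = w0
    for _ in range(max_epochs):             # stopping parameter: maximum number of epochs
        step = learning_rate * grad(w)
        if abs(step) < tol:                  # stopping parameter: minimum improvement
            break
        w -= step
    return w

grad = lambda w: 2 * (w - 3.0)               # gradient of (w - 3)^2, minimum at w = 3

print(gradient_descent(grad, w0=0.0, learning_rate=0.1))    # reasonable rate: ends near 3.0
print(gradient_descent(grad, w0=0.0, learning_rate=1.1))    # too high: overshoots and diverges
print(gradient_descent(grad, w0=0.0, learning_rate=1e-4))   # too low: barely moves in 100 epochs
```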


How to Build a Cyber-Resilient Company From Day One

Despite your best proactive measures, some cyber threats will infiltrate your defenses. Reactive defenses, such as firewalls and antivirus software, help to minimize damage when these incidents occur. Firewalls monitor and control incoming and outgoing network traffic based on predetermined security rules, forming the first line of defense against cyber threats. Antivirus software complements firewalls by detecting, preventing and removing malicious software. Intrusion Detection and Prevention Systems (IDS/IPS) monitor your network for suspicious activities and potential threats, alerting you to a potential attack and, in some cases, taking action to mitigate the threat. Encryption is another valuable reactive measure that involves making your sensitive data unreadable to anyone without the appropriate decryption key, thus protecting it even if it falls into the wrong hands. Security Information and Event Management (SIEM) systems provide real-time analysis and reporting of security alerts generated by applications and network hardware. They help detect incidents early and respond promptly.
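To make the encryption point concrete, here is a small sketch using the Fernet recipe from the Python cryptography package; the library choice and the sample message are assumptions for illustration, not tools named in the article.

```python
# Symmetric encryption sketch: without the key, the stored bytes are unreadable,
# which is the protection the article describes if data falls into the wrong hands.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                  # the decryption key; keep it secret and backed up
f = Fernet(key)

ciphertext = f.encrypt(b"customer record: account 1234")   # illustrative data
print(ciphertext)                            # opaque bytes to anyone without the key
print(f.decrypt(ciphertext))                 # plaintext is recoverable only with the key
```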


Quantum Algorithms vs. Quantum-Inspired Algorithms

Quantum-inspired algorithms usually refer to one of two things: (i) classical algorithms based on linear algebra methods — often methods known as tensor networks — developed in the recent past, or (ii) methods that attempt to use a classical computer to simulate the behavior of a quantum computer, so that the classical machine runs algorithms that exploit the same laws of quantum mechanics that benefit real quantum computers. On (i), while the physics community has leveraged these methods to address problems in quantum mechanics since the 70s [Penrose], tensor networks also have an independent origin in neuroscience as far back as the 80s, because there is nothing really quantum behind them; it really is just linear algebra. For (ii), the process of emulating a quantum system runs up against the limitations of classical hardware. It is very hard to classically emulate the full dynamics of a large quantum system for the exact same reasons that one wants to actually build a real one! So, does this mean that quantum-inspired algorithms are bogus? Not really.
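As a toy example of point (i), the sketch below stores a tiny three-site state as a chain of tensors and contracts it back with nothing but NumPy linear algebra; the shapes and bond dimension are arbitrary illustrative choices.

```python
# A matrix-product-state-style chain of three tensors, contracted with plain
# linear algebra (np.einsum). Nothing quantum happens here; it is just arrays.
import numpy as np

d, chi = 2, 3                          # physical dimension, bond dimension (illustrative)
A1 = np.random.rand(d, chi)            # site 1: (physical, bond)
A2 = np.random.rand(chi, d, chi)       # site 2: (bond, physical, bond)
A3 = np.random.rand(chi, d)            # site 3: (bond, physical)

psi = np.einsum('ia,ajb,bk->ijk', A1, A2, A3)
print(psi.shape)                       # (2, 2, 2): the full state tensor
# For n sites, the chain stores O(n * d * chi^2) numbers instead of d**n amplitudes,
# which is why such factorizations can be useful on classical hardware.
```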


Operator survey: 5G services require network automation

"Private 5G" and "network slicing" rank second and third, respectively. Heavy Reading expects their importance and popularity to increase as additional operators deploy 5G SA and can support full autonomy. "Performance SLAs for enterprise services" is currently the lowest ranking (fifth) of all service choices but is likely to be a valuable market, especially for network slicing and private 5G. "Connected devices (e.g., cars, watches, other IoT devices)" ranks just above performance SLAs in fourth. Internet of Things (IoT) is a sizeable market within 4G, but the massive machine-type communications (mMTC) use case has yet to be realized in 5G, as technologies such as RedCap remain underdeveloped. Smaller operators have a different opinion from larger operators on the revenue growth question. For mobile operators with less than 9 million subscribers, private 5G ranks first. This result perhaps indicates that smaller operators feel they are already exploiting eMBB services and see little scope for further revenue growth with SA.


Top 5 Features your ITSM Solution Should Have

Addressing the root causes of recurring incidents and preventing them from happening again is the core of what a problem management module is designed for. Robust problem management functionality helps investigate, analyze, and identify underlying causes, leading to effective problem resolution. A reliable ITSM solution should include features such as root cause analysis, trend identification, and proactive problem identification. This should provide a structured approach to change requests, reduce the impact of incidents, and improve the overall stability of your IT environment. A comprehensive knowledge management system is a necessary asset for any IT service desk. It serves as a centralized repository of information, providing users with self-help resources, troubleshooting guides, and best practices from within the organization. A well-organized and searchable knowledge base allows users to access relevant articles and documentation for independent issue resolution. Knowledge bases reduce reliance on IT support and enable faster problem resolution. When choosing an ITSM solution with a knowledge base, look for user-friendly interfaces, easy personalization, and collaborative features.


No cyber resilience without open source sustainability

Open source sustainability is a problem: maintainers of popular software projects are often overwhelmed by issues and pull requests to the point of burnout. Donations have emerged as one solution, and are regularly provided by governments, foundations, companies, and individuals. Yet, as excerpts of recent drafts of the CRA indicate, it could threaten to undermine sustainability by introducing a burdensome compliance regime and potential penalties if a maintainer decides to accept donations. The result will be fewer resources flowing to already under-resourced maintainers. Open source projects are often multi-stakeholder: they receive contributions from developers building as individuals, volunteering in foundations, and working for companies, large and small. The current text would regulate open source projects unless they have “a fully decentralised development model.” Any project where a corporate employee has commit rights would need to comply with CRA obligations. This turns the win-wins of open source on its head. Projects may ban maintainers or even contributors from companies, and companies may ban their employees from contributing to open source at all.


Building Trust in a Trustless World: Decentralized Applications Unveiled

In a DApp, smart contracts are used to store the program code and state of the application. They replace the traditional server-side component in a regular application. However, there are some important differences to consider. Computation in smart contracts can be costly, so it's crucial to keep it minimal. It's also essential to identify which parts of the application require a trusted and decentralized execution platform. With Ethereum smart contracts, you can create architectures where multiple smart contracts interact with each other, exchanging data and updating their own variables. The complexity is limited only by the block gas limit. Once you deploy your smart contract, other developers may use your business logic in the future. There are two key considerations when designing smart contract architecture. First, once a smart contract is deployed, its code cannot be changed in any way; the only exception is complete removal, which is possible only if the contract was programmed with an accessible SELFDESTRUCT opcode.
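As a small illustration of the immutability point, the sketch below reads whatever bytecode is deployed at an address using web3.py (v6-style method names assumed); the RPC endpoint and the address are placeholders, not real deployments.

```python
# Inspect the bytecode deployed at a contract address. Once deployed, this code
# cannot be modified; it can only disappear entirely if SELFDESTRUCT was invoked.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))    # placeholder RPC endpoint
address = Web3.to_checksum_address("0x0000000000000000000000000000000000000000")  # placeholder

code = w3.eth.get_code(address)
if code == b"":
    print("No code at this address (never deployed, or removed via SELFDESTRUCT).")
else:
    print(f"{len(code)} bytes of immutable bytecode live at this address.")
```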


How the upcoming Cyber Resilience Act will impact privacy

The Cyber Resilience Act has several positive implications for privacy. First, by enforcing strict standards of cybersecurity in the development and production of new devices, the Act creates an ecosystem where security is ingrained in the product development cycle. Second, by creating reporting obligations, the Act ensures that vulnerabilities are addressed promptly, reducing the risk of personal data breaches and protecting the privacy of individuals. Third, the Act empowers consumers by ensuring they are informed about the vulnerabilities in their devices and the measures they can take to protect their personal data. From the perspective of data controllers, particularly those who serve as manufacturers of devices regulated by the Act, compliance requirements are raised to an even higher threshold. ... Additionally, they will have to comply with reporting obligations regarding vulnerabilities, even those that have already been fixed, regardless of whether personal data was affected or not. Neglecting to fix known vulnerabilities may also result in reputational consequences for data controllers.


Crafting a cybersecurity resilience strategy: A comprehensive IT roadmap

In recent years, there has been a significant increase in the demand for cybersecurity professionals due to the growing importance of protecting sensitive information and systems from cyber threats. Organizations are allocating larger budgets to enhance their cybersecurity measures, resulting in a surge in the number of job opportunities in this field. According to the latest Cyber Security Report by Michael Page, companies are actively seeking skilled cybersecurity talent to address their security challenges. The report reveals that globally, more than 3.5 million cybersecurity jobs are expected to remain unfilled in 2023 due to a shortage of qualified professionals. This shortage has created a sense of desperation among companies, as they struggle to find suitable candidates to fill these critical roles. India is projected to have over 1.5 million vacant cybersecurity positions by 2025, underscoring the immense potential for career growth in this field. To effectively address the ever-changing risks of digitalization and increasing cyberthreats, it is crucial for organizations to implement a continuous security program. 


The rise of OT cybersecurity threats

There is a need for a separate security program for OT that includes different tools, governance, and processes. Companies can’t simply extend their IT security program to OT, as the differences between the two domains are too great. It may require two security operation centers (SOCs), which adds to the complexity and costs of cybersecurity management. Bellack explains that some CEOs or CIOs underestimate the risks associated with an OT attack. “It’s a relatively new set of risks and a lot of executives don’t understand that they are indeed in danger,” Bellack says. “Companies build smarter, faster, cheaper factories using digital technologies because it’s great for business. But it also expands their attack surface, and many people in charge don’t realize the impacts or what they need to do to protect themselves.” ... “Machines are components in a complex, revenue producing infrastructure that is a mix of physical, digital, and human elements. Safety and availability are the key focus, and security is sometimes forced to take a back seat if either of those may be compromised,” explains Boals.



Quote for the day:

"Practice isn't the thing you do once you're good. It's the thing you do that makes you good." -- Malcolm Gladwell

Daily Tech Digest - July 14, 2023

AI and privacy: safeguarding your personal information in an age of intelligent systems

AI models, including chatbots and generative AI, rely on vast quantities of training data. The more data an AI system can access, the more accurate its models should be. The problem is that there are few, if any, controls over how data is captured and used in these training models. With some AI tools connecting directly to the public internet, that could easily include your data. Then there is the question of what happens to queries from generative AI tools. Each service has its own policy for how it collects and stores personal data, as well as how it stores query results. Anyone who uses a public AI service needs to be very careful about sharing either personal information or sensitive business data. New laws will control the use of AI; the European Union, for example, plans to introduce its AI Act by the end of 2023. And individuals are, to an extent, protected from the misuse of their data by the GDPR and other privacy legislation. But security professionals need to take special care of their confidential information.


Are LLMs Leading DevOps Into a Tech Debt Trap?

It depends on how we use the expertise in the models. Instead of asking it to generate new code, we could ask it to interpret and modify existing code. For the first time, we have tools to take down the “not invented here” barriers we’ve created because of the high cognitive load of understanding code. If we can help people work more effectively with existing code, then we can actually converge and reuse our systems. By helping us expand and operate within our working systems base, LLMs could actually help us maintain less code. Imagine if the teams in your organization were invested in collaborating around shared systems! We haven’t done this well today because it takes significantly more time and effort. Today, LLMs have thrown out those calculations. Taking this just one more step, we can see how improved reuse paves the way for reduction of the number of architectural patterns. If we improve our collaboration and investment in sharing code, then there is increased ROI in making shared patterns and platforms work. I see that as a tremendous opportunity for LLMs to improve operations in a meaningful way.
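To show what "interpret existing code" might look like in practice, here is a hedged sketch using the OpenAI Python SDK; the model name, prompt, and legacy snippet are illustrative placeholders, and the article does not prescribe any particular tool.

```python
# Ask an LLM to explain and improve existing code rather than generate new code.
# Model name and the legacy snippet are placeholders for illustration.
from openai import OpenAI

client = OpenAI()                            # reads OPENAI_API_KEY from the environment

legacy_snippet = """
def rollup(rows):
    out = {}
    for r in rows:
        out[r[0]] = out.get(r[0], 0) + r[2]
    return out
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",                     # placeholder model choice
    messages=[
        {"role": "system",
         "content": "You explain and refactor existing code; do not rewrite it from scratch."},
        {"role": "user",
         "content": f"Explain what this function does and suggest a clearer name:\n{legacy_snippet}"},
    ],
)
print(response.choices[0].message.content)
```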


EU-US Data Transfer Framework will be overturned within five years, says expert

The European Commission has adopted the adequacy decision for the EU-US Data Privacy Framework after years of talks, but experts have indicated it will struggle to uphold the framework in court. In its decision announced on 10 July, the Commission found that the US upholds a level of protection comparable to that of the EU when it comes to the transfer of personal data. Companies that comply with the extensive requirements of the framework can access a streamlined path for transferring data from the EU to the US without the need for extra data protection measures. The framework is likely to face legal action and be overturned, according to Nader Henein, research VP of privacy and data protection at Gartner. “It takes one step closer to what the European Court of Justice needs, but it takes one where the Court of Justice needs it to take five, or ten steps,” Henein told ITPro. “Maximilian Schrems already said he was going to do it, and if not him, someone else will, like the EFF or multiple privacy groups. What we’re telling our clients is two to five years, depending on who raises the request, when they raise it, and who they use.”


What Does the Patchless Cisco Vulnerability Mean for IT Teams, CIOs?

The lack of a patch or workaround for the vulnerability is not typical, and it likely indicates a complex issue, according to Guenther. “It signifies that the vulnerability may be deeply rooted in the design or implementation of the affected feature,” she says. With no workarounds or forthcoming patch, what can IT teams do in response to this vulnerability? Before taking a specific action, IT teams need to consider whether this vulnerability impacts their organization. “I have seen companies go into a panic, only to find out that a particular issue didn’t really affect them,” says Alan Brill, senior managing director in the Kroll Cyber Risk Practice and fellow of the Kroll Institute, a risk and financial advisory solutions company. When determining potential impact, it is important for IT teams to take a broad view. The vulnerability may not directly impact an organization, but what about its supply chain? Third-party risk is an important consideration. If an IT team determines that the vulnerability does impact their organization, what is the risk level? How likely is threat actor exploitation?


Internet has Become An AI Dumping Ground, No Solution in Sight

After realising the potential of generative AI models like GPT, people have taken a step further and started filling websites with junk generated by AI to get the attention of advertisers. This content aims to attract paying advertisers, according to a report from the media research organisation NewsGuard. The companies behind the models generating this content have been vocal about the measures they are taking to deal with the issue, but no concrete plan has yet been executed. According to the report, more than 140 major brands are currently paying for advertisements that end up on unreliable AI-written sites, likely without their knowledge. The report further clarifies that the websites in question are presented in a way that a reader could assume they are produced by human writers, because the sites have a generic layout and content typical of news websites. Furthermore, these websites do not clearly disclose that their content is AI-produced. Hence, it is high time authorities step in and take charge of monitoring not just false content but also non-human-generated content.


Train AI models with your own data to mitigate risks

To be successful in their generative AI deployments, organizations should fine-tune the AI model with their own data, Klein said. Companies that take the effort to do this properly will move forward faster with their implementation. Using generative AI on its own will prove more compelling if it is embedded within an organization's data strategy and platform, he added. Depending on the use case, a common challenge companies face is whether they have enough data of their own to train the AI model, he said. He noted, however, that data quantity does not necessarily equate to data quality. Data annotation is also important, as is applying context to AI training models so the system churns out responses that are more specific to the industry the business is in, he said. With data annotation, individual components of the training data are labeled to enable AI machines to understand what the data contains and which components are important. Klein pointed to a common misconception that all AI systems are the same, which is not the case.
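To ground the fine-tuning and annotation points, here is a hedged sketch using the Hugging Face transformers and datasets libraries; the base model, label set, and example records are assumptions for illustration, not anything Klein described.

```python
# Fine-tune a small pretrained classifier on a handful of annotated company records.
# Base model, labels, and examples are illustrative; real projects need far more data.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

examples = {
    "text": ["Claim denied due to missing policy number.",
             "Customer requests a premium refund.",
             "Broker asks about underwriting criteria."],
    "label": [0, 1, 2],                      # annotated by domain experts
}
dataset = Dataset.from_dict(examples)

model_name = "distilbert-base-uncased"       # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
)
trainer.train()                              # the resulting model speaks the company's domain
```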


DevOps Has Won, Long Live the Platform Engineer

A decade ago, DevOps was a cultural phenomenon, with developers and operations coming together and forming a joint alliance to break through silos. Fast forward to today and we’ve seen DevOps further formalized with the emergence of platform engineering. Under the platform-engineering umbrella, DevOps now has a budget, a team and a set of self-service tools so developers can manage operations more directly. The platform engineering team provides benefits that can make Kubernetes a self-service tool, enhancing efficiency and speed of development for hundreds of users. It’s another sign of the maturity and ubiquity of Kubernetes. ... When a technology becomes ubiquitous, it starts to become more invisible. Think about semiconductors, for example. They are everywhere. They’ve advanced from micrometers to nanometers, from five nanometers down to three. We use them in our remote controls, phones and cars, but the chips are invisible and as end users, we just don’t think about them.


How Google Keeps Company Data Safe While Using Generative AI Chatbots

“We approach AI both boldly and responsibly, recognizing that all customers have the right to complete control over how their data is used,” Google Cloud’s Vice President of Engineering Behshad Behzadi told TechRepublic in an email. Google Cloud makes three generative AI products: the contact center tool CCAI Platform, the Generative AI App Builder and the Vertex AI portfolio, which is a suite of tools for deploying and building machine learning models. Behzadi pointed out that Google Cloud works to make sure its AI products’ “responses are grounded in factuality and aligned to company brand, and that generative AI is tightly integrated into existing business logic, data management and entitlements regimes.” ... In late June 2023, Google announced a competition for something a bit different: machine unlearning, or making sure sensitive data can be removed from AI training sets to comply with global data regulation standards such as the GDPR. 


Understanding the Benefits of Computational Storage

The Storage Networking Industry Association (SNIA) defines computational storage as “architectures that provide computational storage functions (CSFs) coupled to storage, offloading host processing or reducing data movement.” The advantage of computational storage over traditional storage, LaChapelle notes, is that it pushes the computational requirement to handle data queries and processing closer to the data, thereby reducing network traffic and offloading work from compute CPUs. There are two general categories of computational storage: fixed computational storage services (FCSS) and programmable computational storage services (PCSS). “FCSS are optimized for specific, computationally intensive tasks such as inline compression or encryption at the drive,” LaChapelle says. ... There are several different approaches to computational storage, such as the integration of processing power into individual drives (in-situ processing), and accelerators that sit on the storage bus at the storage controller, not in the drives themselves.


Sustainable IT: A crisis needing leadership and change

IT leaders play a crucial role in spearheading sustainability initiatives within their organizations, yet according to the non-profit SustainableIT.org, one in four IT organizations is not supporting any ESG mandates. Why is this? Implementation challenges could present a roadblock. A lack of standards to follow to evaluate a company’s carbon footprint also presents challenges. In fact, 50% of firms surveyed in the Capgemini report say they have an enterprise-wide sustainability strategy, but only 18% have a strategy with defined goals and target timelines. ... This is where IT leadership needs to step up. IT leaders have the right relationships and are best positioned to pioneer and champion this change. These leaders have the power to ask the right questions, initiate process changes, and implement strategies that foster a more environmentally friendly business environment. For instance, IT leaders can improve employee awareness surrounding sustainability and can streamline data processes to optimize efficiency and reduce electricity consumption.



Quote for the day:

"A good leader can't get too far ahead of his followers" -- Franklin D. Roosevelt

Daily Tech Digest - July 13, 2023

Industry groups call for changes to EU Cyber Resiliency Act

The first recommendation made by the collective is that the proposed scope of the CRA should be made narrower and clearer. "Any reference to 'remote data processing solutions' should be excluded from the scope of the CRA to ensure legal clarity, and to avoid overlaps with existing legislation and unnecessary burden," they wrote. Software as a service, platform as a service, or infrastructure as a service should not be considered within the scope of the CRA, and this clarification should be reflected in the core legal text to provide greater legal certainty and to facilitate implementation across the EU, the recommendation read. ... The second recommendation calls for a more proportionate approach to determining a product's risk-level, along with greater certainty for manufacturers to ascertain if a product is deemed a critical one. "A transparent and inclusive review process involving economic operators should be set up to determine whether a product is critical," the groups wrote. This would avoid wrongfully designating too many products as "critical," making them more expensive...


AI’s Impact on Security, Risk and Governance in a Hybrid Cloud World

To build an AI-driven compliance, security and governance solution, you must first be able to scale and learn from large data sets. To learn from the data, you must build training models for the data to be processed effectively by the AI component. These training models require the ability to analyze and operate at scale and support different training models for different use cases. Since we need to analyze and operate at scale continuously, we have moved from the underlying tech of machine learning (ML) to deep learning (DL) based on neural net technology. With this technology, we can detect, analyze and prioritize the findings. The second part of this is auto-remediation; this enables us to understand where the problem is developing and what actions, if taken, would create the biggest impact. This prioritization technique driven by AI and our proprietary technology working together creates a scenario of a self-healing environment. In this environment, a problem is addressed before it becomes a serious issue. 
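As a toy sketch of the detect-prioritize-remediate loop described above, the snippet below ranks findings by a simple risk score and auto-fixes the ones it recognizes; the scoring formula, finding fields, and remediation actions are invented for illustration and are not the vendor's proprietary technology.

```python
# Toy prioritization and auto-remediation loop. Fields, scores, and actions are
# illustrative inventions, not the proprietary approach described in the article.

findings = [
    {"id": "F-101", "severity": 0.9, "exposure": 0.7, "fix": "rotate-credentials"},
    {"id": "F-102", "severity": 0.4, "exposure": 0.9, "fix": "close-public-port"},
    {"id": "F-103", "severity": 0.2, "exposure": 0.1, "fix": "patch-library"},
]

REMEDIATIONS = {
    "rotate-credentials": lambda f: print(f"[auto] rotating credentials for {f['id']}"),
    "close-public-port":  lambda f: print(f"[auto] closing exposed port for {f['id']}"),
}

# Rank by a simple risk score, then remediate automatically where a playbook exists.
for finding in sorted(findings, key=lambda f: f["severity"] * f["exposure"], reverse=True):
    action = REMEDIATIONS.get(finding["fix"])
    if action:
        action(finding)                       # address the problem before it escalates
    else:
        print(f"[queue] {finding['id']} needs human review")
```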


9 tips for recruiting high-end IT talent

“Create a brand and reputation to attract this kind of talent to the work you do and your company’s culture,” says Drees. “That could be LinkedIn content or articles you post on your company site.” It could be stories in the news about your company or what personnel and clients say about the company in social media. ... “Give people the ability to grow, mature, and evolve,” says Majeed, whose leadership team has spent a great deal of time, thought, and money on this idea, focusing on creating a culture that nurtures and incubates talent, going so far as to build customized learning programs that encourage people to learn new technical skills and to grow their career. “We also give people so much flexibility to do what they want to do,” he says. This might sound like a distraction from work — time consuming, perhaps, or expensive. But it’s effective, he says. “It makes people more productive — they are working with passion and purpose.” ... “Leverage the engineers on your team, who are excited about the challenges they’re solving,” says Drees.


Combatting data governance risks of public generative AI tools

Integration enables users to obtain answers or sentences derived from enterprise data relevant to their queries. While publicly available generative AI tools permit natural language querying, public web data is not always applicable to the use case. Knowledge management solutions connect data from various data sources and business applications to consolidate the data into a central knowledge base. When it comes to querying about a customer or details of a business document, this is the only way to retrieve answers based on specific company entities. Additionally, delta crawling (i.e., crawling for new data only) ensures that the model’s data is always up to date, so users aren’t receiving old and obsolete information. ... ChatGPT and other publicly available models, like Google Bard, do not cite where their outputs came from. So, how do you know if the content came from a reliable source versus an opinionated blog or insignificant public forum? Adding the source allows users to open the corresponding document or file and view all the details to confirm accuracy and gain further insight into their query.
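To make the delta-crawling idea concrete, here is a minimal sketch that re-indexes only documents modified since the last crawl; the in-memory document store and timestamps are placeholders for whatever sources a real connector would query.

```python
# Delta crawl sketch: only content changed since the last crawl is re-indexed,
# keeping the knowledge base current without re-processing everything.
from datetime import datetime, timezone

documents = [
    {"id": "contract-17", "modified": datetime(2023, 7, 10, tzinfo=timezone.utc)},
    {"id": "faq-3",       "modified": datetime(2023, 6, 1,  tzinfo=timezone.utc)},
]

last_crawl = datetime(2023, 7, 1, tzinfo=timezone.utc)

for doc in (d for d in documents if d["modified"] > last_crawl):
    print(f"re-indexing {doc['id']}")        # only fresh content reaches the index

last_crawl = datetime.now(timezone.utc)      # high-water mark for the next run
```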


Civil society groups call on EU to put human rights at centre of AI Act

The groups are therefore calling on the EU institutions to draw clear limits on the use of AI by national security, law enforcement and migration authorities, particularly when it comes to “harmful and discriminatory” surveillance practices. They say these limits must include a full ban on real-time and retrospective "remote biometric identification" technologies in publicly accessible spaces, by all actors and without exception; a prohibition on all forms of predictive policing; a removal of all loopholes and exemptions for law enforcement and migration control; and a full ban on emotion recognition systems. They added the EU should also reject the Council’s attempt to include a blanket exemption for systems developed or deployed for national security purposes; and prohibit the use of AI in migration contexts to make individualised risk assessments, or to otherwise “interdict, curtail and prevent” migration. The groups are also calling for the EU to properly empower members of the public to understand and challenge the use of AI systems.


The Challenges and Rewards of Zero Trust Privacy

A primary challenge that occurs with the implementation of zero trust privacy is the lack of a compliance footprint. A compliance footprint is a list of all the laws, regulations and standards the organization must adhere to. Often, companies do not have a team or individual responsible for monitoring changes in the compliance landscape. Failure to do this impacts privacy compliance and the ability to implement zero trust privacy. Organizations cannot guarantee that the system architecture restricts the flow of data beyond that which is legal because they do not know their obligations. We see this today with the increase in privacy fines that have been issued for inappropriate collection and transmission of personal data. Another challenge is that organizations often start with identity and access management. When users’ access and authorization permissions are enabled for an unknown set of data elements, organizations cannot guarantee compliance with least privilege requirements.


Microsoft jumps into competitive security service edge (SSE) arena

Analysts say Microsoft, while late to the market, will be a welcome player in the SSE arena given its large customer base. “Cisco, Palo Alto Networks, Symantec, and Zscaler have a multi-year start over Microsoft. Gaining momentum in a crowded market will take work,” wrote Dell’Oro Group research director Mauricio Sanchez in a blog about the SSE announcement. “Everyone knows who Microsoft is and generally enjoys substantial goodwill among its customer base. A large salesforce and partner ecosystem will open many doors,” Sanchez stated. “Large enterprises that are strong Microsoft shops and take advantage of Microsoft’s Enterprise Licensing Agreement benefits could lead to significant uptake of Microsoft SSE solution.” Also, no other SSE vendor has the same identity vendor chops that Microsoft brings. SSE is identity-heavy, which Microsoft can exploit by owning the identity use cases end-to-end, Sanchez stated. Microsoft Windows and Office 365 clients can preview the SSE software, and it will be generally available for other operating systems later this year.


The obsession advantage in transformation

During tough times, it’s easy to look at customers as a means to an end—a way to drive revenue and help your bottom line. But that’s a terrible approach; your customer also is going through the same difficult times, and this is your chance to support them. Obsess about their pain points and learn how you can be there for them. Work from my PwC colleagues has shown that when companies wire a deep understanding of customers into their business models, operations, and decision-making, they not only increase value for customers, but gain insights that help to further differentiate the business. ... The most transformation-ready leaders look to other innovative approaches to gain new perspectives. Whether this is through conversations with executives in different industries, speaking with sports coaches or sociologists, reading and researching relevant case studies, or speaking one-to-one with more junior employees at your own company, gaining a new perspective can often lead to powerful inspiration. Don’t wait for these views to come to you, either.


Building a Data Driven Organization

"The key lies in democratizing data assets and their utilization by providing user-friendly tools, offering literacy courses, and promoting approaches that enable employees across the organization to generate insights," he says. He adds it is not enough for top management to merely include data-driven initiatives in their business strategy -- they must visibly and consistently support the cultural transformation. "This involves actively measuring progress, recognizing early adopters as champions, and rewarding them accordingly," he says. "Holding leaders accountable for driving cultural change in their respective areas is essential." ... The data governance element is also critical, which means establishing goals, measurements, and continuous improvement practices to maximize the value derived from data and ensure user satisfaction. "Set clear objectives for data utilization, monitoring performance against these goals, and consistently refining processes to optimize data-driven practices," he says. By implementing these practices, organizations can foster a data-driven culture where employees are equipped with the necessary tools, skills, and mindset to leverage data effectively in their decision-making processes.


Leap to leader: Make yourself heard

It’s not just a matter of going into a meeting and asking for a raise or promotion. Instead, imagine how an agent or headhunter would represent you. How would they make the case for you getting the job or the raise you deserve? And remember, it’s not just your boss you have to convince; your goal is to give them specifics so that they can go make a case for you to their boss and to HR. Ground the conversation in facts. What have you accomplished? How has your work helped drive the business? Can you point to concrete ways in which you’ve added value? ... There’s a mental loop people can get caught in that might keep them from pushing for more money, whether negotiating for a raise or for a pay package that comes with the new job. “I don’t want to rock the boat,” they say to themselves. “I want to make sure things start on a positive note. I’m grateful for the opportunity.” As a result, they settle too quickly. But for more senior roles, the person on the other side of the table is expecting you to push, and they’ve probably built in some negotiating room for when you do start pushing.



Quote for the day:

"It is not fair to ask of others what you are not willing to do yourself." -- Eleanor Roosevelt

Daily Tech Digest - July 12, 2023

4 collaboration security mistakes companies are still making

If organizations don’t provide access to vetted collaboration tools, employees will likely find their own and use insecure solutions, said Sourya Biswas, technical director, risk management and governance at security consulting firm NCC Group. “Therefore, while it’s important for organizations to embrace digital collaboration, at the same time they should prevent installation and use of unapproved tools, via mechanisms such as restricted local admin access and managed browser solutions.” Even when collaboration tools are vetted and approved, organizations must be cognizant of the different collaboration platforms that each employee is allowed to access in order to prevent sensitive data from being exfiltrated and avoid providing new attack vectors for bad actors, said Michael McCracken, senior director of end user solutions at SHI International, a reseller of technology products and services. In addition, IT needs to maintain central control over these tools, said AJ Yawn, partner, risk assurance advisory at Armanino, an independent accounting and business consulting firm.


EC Says European Private Data Can Flow to Compliant US Companies

The business community had been waiting for guidance on how data privacy policy might look in the EU, says Dona Fraser, senior vice president of privacy initiatives with BBB National Programs, a nonprofit that oversees national, industry self-regulation programs. With the former EU-US Privacy Shield rendered invalid in 2020 by the European Court of Justice, new policy was needed. Fraser says companies wanted to comply and be able to safely conduct business without worry of intervention or whether or not their consumers were being treated properly, but policy was in limbo. The announcement about the new framework seems to have restored confidence in the program. “This week,” she says, “we’ve received an enormous amount of inquiries from current and past participants saying, ‘What's next, what do we do?’ The eagerness that we’re hearing in the marketplace is, for us, from a business perspective, it’s great to hear.” Logistics of the framework and the approval process for businesses still need to be worked out, Fraser says, but now the door is open for companies that halted work with data from Europe to reemerge.


CISO perspective on why boards don’t fully grasp cyber attack risks

A CISO needs to understand the knowledge and background of the board members to be able to translate technical jargon into business language and something familiar with the target audience. I approach this by relating technical jargon to everyday situations or business scenarios, something the board can easily grasp. To be effective at this style of communication, I collaborate with other business leaders outside of the technology groups to optimize business alignment. Focusing on the potential business impact of cybersecurity risk also allows a CISO to frame technical issues in terms of their consequences such as financial loss or damage to the company’s brand. It is equally important to be concise and avoid over-embellishing cyber-risks, while still focusing on the strategic objectives you are asking the board to weigh in on. To bridge the gap between board members and CISOs to promote the mitigation of cyber-risk, it is essential that a CISO enhance communication, educate board members about cybersecurity risks and promote a collaborative approach to decision making.


Data Management at Scale

If your company already has a high level of data management maturity or is decentrally organized, then you can begin with a more decentralized approach to data management. However, to align your decentralized teams, you will need to set standards and principles and make technology choices for shared capabilities. These activities need to happen at a central level and require superb leaders and good architects. I’ll come back to these points toward the end of this chapter, when discussing the role of enterprise architects. Besides the starting point, there are other aspects to take into consideration with regard to centralization and decentralization. First, you should determine your goals for the end of your journey. If your intended end state is a decentralized architecture, but you’ve decided to start centrally, the engineers building the architecture should be aware of this from the beginning. With the longer-term vision in mind, engineers can make capabilities more loosely coupled, allowing for easier decentralization at a later point in time.


Designing High-Performance APIs

By incorporating specific design principles, developers can build APIs that scale effectively and operate efficiently. Here are key considerations for building scalable and efficient APIs: Stateless Design: Implement a stateless architecture where each API request contains all the necessary information for processing. This design approach eliminates the need for maintaining a session state on the server, allowing for easier scalability and improved performance. Use Resource-Oriented Design: Embrace a resource-oriented design approach that models API endpoints as resources. This design principle provides a consistent and intuitive structure, enabling efficient data access and manipulation. Employ Asynchronous Operations: Use asynchronous processing for long-running or computationally intensive tasks. By offloading such operations to background processes or queues, the API can remain responsive, preventing delays and improving overall efficiency. Horizontal Scaling: Design the API to support horizontal scaling, where additional instances of the API can be deployed to handle increased traffic. 
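A minimal sketch combining two of these principles, stateless design and asynchronous operations, is shown below using FastAPI; the framework, endpoint, and parameter names are illustrative assumptions rather than recommendations from the article.

```python
# A stateless, resource-oriented endpoint that offloads slow work to a background
# task so the API stays responsive. Framework and names are illustrative choices.
import asyncio
from fastapi import BackgroundTasks, FastAPI

app = FastAPI()

async def generate_report(customer_id: str) -> None:
    await asyncio.sleep(5)                   # stand-in for a long-running task

@app.post("/reports/{customer_id}")
async def request_report(customer_id: str, background_tasks: BackgroundTasks):
    # Stateless: everything needed is in the request; no server-side session.
    background_tasks.add_task(generate_report, customer_id)
    return {"status": "accepted", "customer_id": customer_id}   # respond immediately
```

Because each instance holds no session state, more copies of this service can be added behind a load balancer to scale horizontally.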


Why SUSE is forking Red Hat Enterprise Linux

To understand what’s happening here, we need to go back a few years. In late 2020, Red Hat made a crucial change to CentOS Linux (the Community Enterprise Linux Operating System). For the longest time, CentOS was essentially the free (as in beer) version of Red Hat Enterprise Linux (RHEL), Red Hat’s flagship distribution. Red Hat acquired CentOS in 2014 after a lot of turmoil in the CentOS community and gained a permanent majority on the CentOS board. “The CentOS project was in trouble,” Gunnar Hellekson, Red Hat’s VP and GM for Red Hat Enterprise Linux, told me. “At the same time, we needed a way to collaborate with other communities — OpenStack in particular at the time. And we said, well, here’s an opportunity! We can take the CentOS project. Now we have something that is freely available and close enough to RHEL to do the development on — and then that gives us a way to work in the community. And then when customers move into production, they can go on to Red Hat Enterprise Linux.”


The Disconnected State of Enterprise Risk Management

Compliance, with its myriad frameworks, standards and mandates, remains the primary means by which we assess and maintain the risk posture of our national, defense and private sector entities. Compliance is how we gauge our resilience, determine shortcomings and prioritize mitigation efforts to resolve them. Compliance, ostensibly, is how we determine where to point our limited security resources in the form of controls to ensure protection against threats. And yet, while the threats occur in real time, our compliance efforts remain relegated to a historical reporting function, capturing our prior state at best or, worse yet, someone’s subjective opinion of an organization’s security posture. After all, most compliance programs today are best characterized as “opinion farming at scale,” built on surveys or manual assessments of controls by human analysts, who in turn depend on the cooperation and information of countless system owners. No matter how high you stack those opinions, they don’t turn into facts. 


Downsides to using cloud autoscaling systems

Autoscaling can reduce costs by optimizing resource utilization, but savings are not guaranteed. I have seen autoscaling systems lead to unexpected cost increases. For example, rapid and frequent scaling operations can generate additional charges that are often unexpected. This will undoubtedly happen if resources are not managed efficiently. I’ve seen unpredictable workload patterns or sudden spikes in demand trigger autoscaling processes. This results in more instances or resources provisioned, but also a potentially enormous cloud bill. The only way to work around this is to carefully analyze and forecast workload patterns to balance scalability and cost-effectiveness. ... Certain applications don’t work well with autoscaling systems. Legacy or monolithic applications that rely on static configurations or have complex interdependencies may not perform very well with autoscaling systems. Of course, there is a fix, normally rewriting a portion of the application, or the entire application, to leverage autoscaling more efficiently.
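The cost point can be made concrete with a back-of-the-envelope sketch comparing a spiky, frequently scaling workload against provisioning for the forecast peak; the prices, demand figures, and the full-hour minimum billing assumption are all invented for illustration.

```python
# Toy cost comparison: frequent scale-out events versus provisioning for peak.
# Assumes (for illustration) that every newly launched instance is billed for a
# full hour even if it runs only minutes. All numbers are invented.

PRICE_PER_INSTANCE_HOUR = 0.20
demand = [2, 6, 2, 7, 2, 8, 2, 9]            # instances needed per 10-minute interval

window_hours = len(demand) * 10 / 60
baseline = min(demand)
launches = sum(max(cur - prev, 0) for prev, cur in zip(demand, demand[1:]))

steady_cost = max(demand) * window_hours * PRICE_PER_INSTANCE_HOUR
autoscale_cost = (baseline * window_hours + launches * 1.0) * PRICE_PER_INSTANCE_HOUR

print(f"provision for peak: ${steady_cost:.2f}")
print(f"autoscaled (spiky): ${autoscale_cost:.2f}")   # higher here because of launch churn
```

With these made-up numbers the autoscaled bill comes out higher, which is the kind of surprise the paragraph warns about; with a smoother workload the comparison would flip.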


Defining the CISO Role

CISOs are tasked with the strategic leadership of information security for their companies. This can entail building a cybersecurity program and overseeing the teams that execute the policies that underpin that program. The responsibilities are many and varied. For example, Heins is responsible for incident response, security engineering and operations, identity and access management, cloud and application security, and governance, risk, and compliance. Effectively implementing cybersecurity demands that CISOs spend much of their time engaging with stakeholders throughout an organization: board members, other executives, and people in other departments. They also spend part of their time on external engagement. Meg Anderson, vice president and CISO of investment management and insurance company Principal Financial Group, notes that she talks with her CISO peers about emerging threats and best practices. That part of the job can help CISOs think about how to structure their programs effectively and build a pipeline of talent for the future.


Security First! Strategies for Building Safer Software

Having security involved in the initial stages of a software development process always made sense, as with bug fixing, it is faster and cheaper to address security issues early on. But, particularly in larger enterprises, it was rarely done in practice. By the same token, individual development teams would tend not to invest in security if they saw it as the role of a dedicated security team and thus somebody else’s problem. This pushed security to the right, as one of the things that happened between development and deploying to production, where security becomes more difficult and often less effective. It also led to friction between the development and security teams, since the two groups had conflicting goals: Developers were under pressure to ship more features more quickly, and saw security as a gatekeeper, slowing down or even halting development to allow time to investigate issues. At its most extreme, developers felt, security’s ideal situation would be that nothing would be deployed to production at all — after all, if nothing is running, then nothing can get hacked.



Quote for the day:

"One must be convinced to convince, to have enthusiasm to stimulate the others." -- Stefan Zweig

Daily Tech Digest - July 11, 2023

Multiple SD-WAN vendors can complicate move to SASE

The walls between networking and security teams must come down to deliver cloud-based security and network services across today’s sophisticated networks. “The opportunity to leverage a cloud-based architecture to enforce security policies to distributed locations and remote workers is the real value of SASE. It offers management efficiencies, it supports a modern workforce, and it supports an important integration between the network and security teams,” IDC’s Butler says. “In today’s world, when you have so many people working from home and so many distributed applications, a cloud-based security approach is really appealing.” As the market continues to evolve, vendors are boosting their capabilities – networking vendors are acquiring or developing security capabilities to offer SASE, and security providers are augmenting their product portfolios with advanced networking capabilities to offer SASE. That aligns with adoption trends; a majority (68%) of 830 respondents to an IDC survey said they would like to use the same vendor for their SD-WAN and security/SASE solution.


Decoding AI: Insights and Implications for InfoSec

AI is wonderfully adept at narrow tasks, but it is clueless beyond its specific training. It’s like a super-specialist who can thread a needle blindfolded but can’t understand why it shouldn’t sew its own fingers together. Say we task an AI with making a company network as secure as possible. It might suggest shutting down the network, preventing user access or even blocking external dataflows because, hey, it’s technically efficient! ... AI could reshape the world of cybersecurity in unimaginable ways, making our lives easier and more efficient. However, it is essential to bear in mind that AI, despite its remarkable abilities, is essentially a tool. It lacks the human touch—our capacity for intuition, empathy and understanding that extends beyond the data. AI will undoubtedly keep improving, but it is on us to guide its evolution in a way that respects our shared humanity and safeguards our values. So, the next time you see a headline touting the latest AI breakthrough, take a moment to appreciate the amazing technology—but remember that it’s not quite as “intelligent” as it might seem.


Sarah Silverman sues OpenAI, Meta over copyright infringement in AI training

The suits, filed last week in federal district court in San Francisco, argued that Microsoft-backed OpenAI and Meta didn’t have permission to use copyrighted works by Silverman and two other authors, Christopher Golden and Richard Kadrey, when they used them to train ChatGPT and Meta's LLaMA (Large Language Model Meta AI). They ask for injunctions against the companies to prevent them from continuing similar practices, as well as unspecified monetary damages. The heart of the lawsuit, according to the complaint, is OpenAI’s use of a data set called BookCorpus, which it said was created in 2015 for the purpose of large language model training. Much of BookCorpus, the plaintiffs say, was copied from a site called Smashwords, a host for self-published novels, which were under copyright. Additionally, the complaint alleges that there is no way that the book-based data sets used to train OpenAI came entirely from legal sources, as no legal databases offer enough content to account for the size of the “Books1” and “Books2” sets.


Law firms under cyberattack

As the UK National Cyber Security Centre (NCSC) noted in a recent report focusing on cyber threats to the legal sector, law firms handle sensitive client information that cybercriminals may find useful, including exploiting opportunities for insider trading, gaining the upper hand in negotiations and litigation, or subverting the course of justice. The potential consequences of such breaches can be severe, as the disruption of business operations can incur substantial costs. Ransomware gangs specifically target law firms to extort money in exchange for allowing the restoration of business operations. In 2020, the Solicitors Regulation Authority (SRA) published a cybersecurity review revealing that 30 out of 40 of the law firms they visited have been victims of a cyberattack. In the remaining ten, cybercriminals have directly targeted their clients through legal transactions. “While not all incidents culminated in a financial loss for clients, 23 of the 30 cases in which firms were directly targeted saw a total of more than £4m [$5m+] of client money stolen,” the SRA noted.


7 IT consultant tricks CIOs should never fall for

Making a business case - Consultants love this one. It’s where the CIO engages them to build the business case for a pet project or priority — not to determine whether there’s even a business case to be made. Making one means starting with the predetermined answer and working backward from there, employing such questionable practices as cherry-picked data, one-sided analyses, inappropriate statistical tests, and selective anecdotes, to name a few, to define and justify a strategic program whose success depends on … surprise! … a major engagement for the consultant’s employer. ... Win, then hire - This is less common for delivery teams than the consultants whose work resulted in the win that created the need for the delivery team, but still … Few consultancies keep a bench of any size. As a result, winning an engagement is often far more stressful than losing one, because after winning an engagement the consultancy has no more than a month or so to hire the staff needed to execute the engagement, familiarize the newly hired staff with the methodology and practices the engagement calls for, and build a working relationship with their new managers.


Why Qubit Connectivity Matters

Of course, high-connectivity architectures are not without disadvantages. High connectivity relies on the ability to shuttle qubits around, and shuttling qubits carries several potential issues. Shuttling qubits can be a relatively slow process compared to the speed of quantum gate operations. This can increase the total computation time and reduce the number of operations that can be performed before the qubits lose coherence. The process of moving qubits introduces the risk of decoherence, which is the loss of the quantum state due to interaction with the environment. Shuttling qubits also adds an extra layer of complexity to the design of the computer, and this can be challenging to implement, especially in a large-scale system. In summary, qubit connectivity plays a vital role in the performance and functionality of quantum computers. It impacts the implementation of quantum algorithms, the creation of quantum entanglement, error correction, and the overall scalability, speed, and efficiency of quantum computing systems. When one considers the quantum modality of choice for their application, qubit connectivity should be one of the factors taken under consideration.
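A toy calculation makes the routing overhead of limited connectivity visible: on a chain of qubits with only nearest-neighbor coupling, a two-qubit gate between distant qubits must first shuttle them together with SWAPs. The gate list and the simple distance-minus-one formula are illustrative; real compilers route far more cleverly.

```python
# Rough SWAP overhead on a 1-D chain with nearest-neighbor connectivity: a gate
# between qubits i and j needs about (|i - j| - 1) SWAPs to make them adjacent.

def swaps_needed(i: int, j: int) -> int:
    return max(abs(i - j) - 1, 0)

circuit = [(0, 1), (0, 7), (3, 12), (5, 6)]          # two-qubit gates (illustrative)

overhead = sum(swaps_needed(i, j) for i, j in circuit)
print(f"extra SWAPs on a linear-connectivity device: {overhead}")   # 14 here
print("extra SWAPs on an all-to-all device: 0")
# Every extra SWAP costs time and adds noise, which is how low connectivity shows up
# in circuit depth, error rates, and overall performance.
```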


Analysts: Cybersecurity Funding Set for Rebound

A lot of the optimism has to do with enterprises continuing to invest heavily in cybersecurity, despite a slowdown in other expenditures. Market research firm IDC expects that organizations will spend some $219 billion this year on security products and services — or some 13% more than they did in 2022 — to address threats, to support hybrid work environments, and to meet compliance requirements. The areas that will receive the most spending are managed security services, endpoint security, network security, and identity and access management. "While the theme of conservatism and expectations for continued headwinds have remained throughout the first half of the year, we do expect to see strategic activity slowly begin to rebound in the second half of 2023 and into 2024," says Eric McAlpine, founder and managing partner of analyst firm Momentum Cyber. Financing and M&A activity will both eventually pick up as companies that were able to make do financially so far begin to feel the need for fresh capital to fuel their business, he says.


Why Enterprises Should Merge Private 5G With Programmable Communications

5G private networks provide an opportunity to integrate the application and the network so that the two can inform one another, allowing adjustments to be made in real time. Businesses not only have an improved network with a private cellular network, but they can also sync their applications with the network’s performance, enabling multiple tasks to be completed based on network performance at a specific moment. ... A new generation of digital engagement providers is looking at how these communication platforms evolve into platforms that integrate across a range of business processes. They are not only leveraging robust voice, video and messaging solutions but also introducing fully programmable computer vision and audio analytics solutions. This combination of communications and AI-based media analytics and programmability makes this evolved communications platform an ideal and unexpected solution to Industry 4.0 business needs. New communication platforms are focused less on meeting one business need but rather on the integration of communications to evolve and inform applications, making adjustments and building cost-effective efficiencies.


5 ways to prepare a new cybersecurity team for a crisis

Not all security incidents cause an enterprise-level crisis, and not all crises are cyber-related. Natural disasters, product recalls, accidents, and public relations debacles are all examples of non-cyber events that could have a significant negative impact on an organization. So, in preparing a new cybersecurity team for a crisis, it is important to define and rank--first by severity and then by likelihood--what precisely the business would define as a security “crisis,” says John Pescatore, director of emerging security trends at the SANS Institute. “It is not the case that the top of the list will always be something like ransomware,” Pescatore says. Sometimes, a crisis might have nothing to do with cybersecurity, he notes. “For example, I remember hearing a Boston-area hospital CIO talk about how they were bombarded with attempts to get into hospital data after the [Boston Marathon] bombing because press reports had noted the bombers went to that hospital.” Once the cybersecurity team has an understanding of what would constitute a security crisis for the company, create playbooks for the top handful of them.


Writing your company’s own ChatGPT policy

To help employees grasp and embrace key basics quickly, one useful starting point can be signposting relevant parts of existing policies they can check for best practices. Producing tailored guidance for an internal ChatGPT policy is slightly more complex. To develop a truly all-encompassing ChatGPT policy, companies will likely need to run extensive cross-business workshops and individual surveys which enable them to identify, and discuss, every use case. Putting in this groundwork, however, will allow them to build specific directions which ultimately ensure better protection, as well as giving workers the comprehensive knowledge required to make the most of advanced tech. ... Explicitly highlighting threats and setting unambiguous usage limitations is also just as critical to leave no room for accidental misuse. This is particularly important for businesses where generative AI may be deployed to streamline tasks that involve some level of PII, such as drafting client contracts, writing emails, or suggesting which code snippets to use in programming.



Quote for the day:

"Learning is a lifetime process, but there comes a time when we must stop adding and start updating." -- Robert Brault