Daily Tech Digest - October 26, 2023

CTOs Look to Regain Control of the IT Roadmap

Putting an emphasis on modular architecture and open standards can ensure easier integration or disengagement from specific solutions, thereby mitigating these concerns. ... instead of an expensive and time-consuming “rip and replace” model, organizations are extending the life and value of their existing ERP investments and shifting their newly freed-up resources to drive innovation “around the edges” of their current robust ERP core. “This approach applies to all industries and sizes, enabling organizations to minimize churn and focus on customer value, competitive advantage and growth,” he says. The survey also indicated IT leaders are exploring alternatives to subscription-based licensing models, focusing on optimizing operational costs and aligning investments with business strategies for growth and innovation. “Applications that enable competitive advantage and differentiate a company are a high priority for organizations, while for example, ERP administration functions like HR and finance offer very little differentiation and are frequently retained as a foundational core, optimized for cost and efficiency,” Rowe explains.


Measure Developer Joy, Not Productivity, Says Atlassian Lead

So, when senior leadership is under pressure to show the outcome of one of their most sizable operating expenses, what’s a tech company to do? First, Boyagi suggested, change your questions. Instead of “How do I increase developer productivity?” or “How can I measure developer productivity?” try “How can I make developers happier?” and “How can I help developers be more productive?” The questions can help steer the conversation in a more useful direction: “I think every company has to go on a journey and do what’s right for them in terms of productivity. But I don’t think measurement is the thing we should be talking about.” First, because productivity for knowledge workers has always been one of the hardest things to measure. And, he added, because we need to take inspiration from other companies, not replicate what they do. Boyagi doesn’t suggest you try to do what Atlassian does. But feel free to take inspiration from and leverage its DevEx strategy, as well as those from other high-performing organizations like Google, Netflix, LinkedIn and Spotify.


How much cybersecurity expertise does a board need?

For companies that have not yet built up cybersecurity expertise among their directors and reporting committees, there’s work to do, says Lam, who explains there are a number of ways to build up that "cyber-IQ". “One is you should get the right board talent in terms of risk and cyber expertise that’s appropriate to their risk profiles,” says Lam, who explains that companies leery of using up a hotly contested director seat for a cyber specialist simply need to broaden their recruitment parameters. ... As organizations slowly morph their board composition, they also need to be careful to not get into a situation where one director is solely responsible for cybersecurity oversight and no one else minds that area of risk, warns Chenxi Wang ... “There’s been an explosive offering of cyber governance training in recent years. While that is a great step in the right direction, a lot of them vary as far as the quality of content goes,” Shurtleff tells CSO. “You can’t substitute somebody’s cyber experience and knowledge from a lifetime of professional experience into a two-week course. ...”


What is a business intelligence analyst? A key role for data-driven decisions

The role is becoming increasingly important as organizations move to capitalize on the volumes of data they collect through business intelligence strategies. BI analysts typically discover areas of revenue loss and identify where improvements can be made to save the company money or increase profits. This is done by mining complex data using BI software and tools, comparing data to competitors and industry trends, and creating visualizations that communicate findings to others in the organization. ... It’s a role that combines hard skills such as programming, data modeling, and statistics with soft skills such as communication, analytical thinking, and problem-solving. Candidates need a well-rounded background to straddle the line between IT and the business, and usually a bachelor’s degree in computer science, business, mathematics, economics, statistics, management, accounting, or a related field. If you have a degree in an unrelated field but have completed courses in these subjects, that can suffice for an entry-level role in some organizations. Some senior positions may require an MBA, but there are plenty of BI jobs that require only an undergraduate degree.


Infrastructure teams need multi-cloud networking and security guardrails

The key is to ensure that the technology implemented is actually providing a guardrail and not imposing a speedbump or roadblock. Network and security teams need to provide infrastructure and services that are programmatic and easy to use. For instance, DevOps should be able to request IP addresses, spin up secure DNS services, request changes to firewall policies, or adjust transit routing with a couple of clicks. If approvals are required from network and security teams, those approvals should be automated as much as possible. This drive toward programmatic services is apparent in my research at Enterprise Management Associates (EMA). For instance, I recently surveyed 351 IT professionals about their multi-cloud networking strategies for the report “Multi-Cloud Networking: Connecting and Securing the Future.” (Check out EMA’s free webinar to learn more about what we found in that research). In that report, 82% of respondents told us that it was at least somewhat important for their multi-cloud networking solutions to have open APIs.
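To make "programmatic and easy to use" concrete, here is a minimal sketch of what a self-service request against such a network-services API might look like; the endpoint, payload fields, and token handling are invented for illustration and do not reflect any particular vendor's interface.

```python
# Minimal sketch of a DevOps self-service call to a hypothetical network-services API.
# The base URL, endpoint paths, and payload fields are illustrative assumptions.
import requests

API = "https://netservices.example.internal/api/v1"
HEADERS = {"Authorization": "Bearer <service-account-token>"}

def request_ip_block(project: str, prefix_length: int) -> dict:
    """Ask the platform for a new IP block; approval may be automated server-side."""
    resp = requests.post(
        f"{API}/ip-blocks",
        json={"project": project, "prefix_length": prefix_length},
        headers=HEADERS,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"cidr": "10.20.4.0/24", "status": "approved"}

def request_firewall_change(project: str, rule: dict) -> dict:
    """Submit a firewall policy change; the platform can auto-approve it by policy."""
    resp = requests.post(
        f"{API}/firewall-requests",
        json={"project": project, "rule": rule},
        headers=HEADERS,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```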


Demystifying the top five OT security myths

“A common belief is that the OT protocols are proprietary, and the attacker doesn’t have access to OT devices or specific proprietary protocols,” he said. “To some extent, the proprietary nature of the OT device does pose a challenge to hacking, but threat actors behind targeted attacks are usually knowledgeable, persistent and resourceful.” Goh said such threat actors, particularly those backed by nation-states, have the resources to replicate an OT system, and create and rigorously test their malware in a lab before launching an attack. “This possibility is highly speculated in the Triton malware attack, which happened in 2017 in a malicious attempt to destroy and damage a petrochemical plant in Saudi Arabia by targeting the safety system,” he added. ... In the concept of defence-in-depth, firewalls are used to separate the different layers of an OT network. Goh said while it is mandatory to use firewalls to protect an OT network from unauthorised access, this protection is only as good as the policy and the security of the firewall. “We all know that misconfigurations of firewall rules happen and are not uncommon,” he said, citing a study that found one in five firewalls have one or two configuration issues.


JPMorgan Chase CISO explains why he's an 'AI optimist'

We've started to look at it. That's the short answer. The longer answer is, I was a bit of an AI pessimist before November of last year. Seeing ChatGPT in action for the first time and what it could do opened my mind -- perhaps many others' as well. It felt like we tipped over the precipice of an AI era. I'm an optimist about its capabilities. Most of the last nine or 10 months or so have been us trying to enable AI to use inside of the firm. We have been users of traditional AI for some time. Generative AI is newer for us in the business. We've spent the last six or seven months designing the right isolated mechanisms that are safe for us to use to produce our data. That's something we'll start doing internally as a business more broadly and think through how we use it as a cybersecurity use case. It's probably not going to be done in a generic sense in the short-term. Cybersecurity practitioners and maybe some industry consortiums need to get together to build and train the right models to support cybersecurity. It's clear to me that one, everybody's thinking about how they use AI in their tech. 


CISOs struggling to understand value of security controls data

Understanding where security controls are failing is a critical first step to mitigating cyber risk and making the right decisions. Unfortunately, only 36% of security leaders are totally confident in their security data and use it for all strategic decision making. This is a concerning finding, as without trusted data CISOs might struggle to influence senior business stakeholders and ensure the right people are held accountable for fixing security issues. ... The benefits of improving data quality and trust are clear, with 84% of security leaders believing that increasing trust in their data would help them secure more resources to protect their organization. But first there needs to be a mindset change in security leaders and the board—away from using controls data for reporting, and instead embracing it to proactively drive business decisions and stop problems before they occur. “The industry needs to change if we are to solve the CISO security controls conundrum, and Continuous Controls Monitoring (CCM) can be the catalyst. It isn’t a better reporting tool, it’s a way of knowing what to do next – making day-to-day cybersecurity firefighting easier and getting ahead of the game on strategic risk,” argues Panaseer Security Evangelist, Marie Wilcox.


How to Become a Data Governance Specialist

Generally, a DG specialist will have a bachelor’s degree in a field related to computers (information technology, computer science) and one to four years of experience. However, a combination of computer and communication skills is needed for this position. Lots of technical experience can stand in for a bachelor’s degree, but the lack of a degree will limit chances for advancements and promotions. Some employment advertisements will require a Data Governance and Stewardship certification. The certification process typically requires a degree, attending a workshop, a test, and a fair amount of experience. Certification can be difficult to get, in part because there are very few organizations offering it. This requirement may be an unrealistic expectation on the part of the employer, particularly for non-management positions. ... Much of Data Governance is actually about changing habitual behavior. When changes are made, it is common for a team to be assembled to execute the project. A Data Governance program must be presented as a practice and not a project. Projects have start and end dates. 


Has Your Architectural Decision Record Lost Its Purpose?

Sometimes the expected longevity of a decision causes a team to believe that a decision is architectural. Most decisions become long-term decisions because the funding model for most systems only considers the initial cost of development, not the long-term evolution of the system. When this is the case, every decision becomes a long-term decision. This does not make these decisions architectural, however; they need to have high cost and complexity to undo/redo in order for them to be architecturally significant. To illustrate, a decision to select a database management system is usually regarded as architectural because many systems will use it for their lifetime, but if this decision is easily reversed without having to change code throughout the system, it’s generally not architecturally significant. Modern RDBMS technology is quite stable and relatively interchangeable between vendor products, so replacing a commercial product with an open-source product, and vice versa, is relatively easy so long as the interfaces with the database have been isolated.
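One practical way to keep such a decision reversible is to isolate all database access behind a narrow interface that the rest of the system depends on. The sketch below illustrates the idea in Python (the article is not language-specific); the class and method names are invented for illustration.

```python
# Sketch: isolating database access behind an interface so the storage choice
# stays easy to reverse. Names and methods are illustrative only.
from abc import ABC, abstractmethod
from typing import Optional
import sqlite3

class OrderRepository(ABC):
    """The only data-access surface the rest of the system is allowed to touch."""

    @abstractmethod
    def save(self, order_id: str, total: float) -> None: ...

    @abstractmethod
    def find(self, order_id: str) -> Optional[float]: ...

class SqliteOrderRepository(OrderRepository):
    def __init__(self, path: str = ":memory:") -> None:
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, total REAL)"
        )

    def save(self, order_id: str, total: float) -> None:
        self.conn.execute("INSERT OR REPLACE INTO orders VALUES (?, ?)", (order_id, total))
        self.conn.commit()

    def find(self, order_id: str) -> Optional[float]:
        row = self.conn.execute(
            "SELECT total FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        return row[0] if row else None

# Replacing SQLite with another engine means writing one new class that implements
# OrderRepository; application code that depends only on the interface does not change.
repo: OrderRepository = SqliteOrderRepository()
repo.save("A-100", 42.0)
print(repo.find("A-100"))
```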



Quote for the day:

"The task of leadership is not to put greatness into humanity but to elicit it, for the greatness is already there." -- John Buchan

Daily Tech Digest - October 24, 2023

7 mistakes to avoid when developing RPAs

“The biggest mistake when using RPA is to fall into the trap of thinking it can automate processes, and in reality, RPA is more accurately robotic task automation (RTA),” says Aali Qureshi, SVP of Sales for the Americas at Kissflow. “RPA bots are great for automating individual, repetitive vertical tasks, but if you want to create and automate more complex horizontal processes that span an entire enterprise, you need a low-code or no-code automation tool that allows you to automate tasks and processes in order to skip hand-coding.” ... It’s not only exceptions that can be problematic, especially when deploying bots to support critical business processes. The next mistake to avoid is deploying bots to production without data validation, error detection, monitoring, and alerting. “RPA is relatively easy as long as one can assume it works correctly, or if it doesn’t, no damage will be done. But malfunctioning RPA can make a huge number of errors in a very short time,” says Hannula. One best practice is centralizing bot monitoring and alerting with the devops or IT ops teams responsible for monitoring applications and infrastructure.
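To make the validation-and-alerting point concrete, here is a minimal sketch of the kind of wrapper a team might put around a bot task before it reaches production; the validation rule, error threshold, and alert hook are illustrative assumptions rather than features of any specific RPA product.

```python
# Sketch: wrapping an automated task with input validation, error detection, and alerting
# so a malfunctioning bot cannot silently make a huge number of errors.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("rpa-bot")

def alert_ops(message: str) -> None:
    """Placeholder for the paging/ticketing integration used by devops or IT ops."""
    log.error("ALERT to IT ops: %s", message)

def validate_invoice(record: dict) -> bool:
    return bool(record.get("invoice_id")) and record.get("amount", 0) > 0

def process_invoices(records: list[dict]) -> None:
    failures = 0
    for record in records:
        if not validate_invoice(record):
            failures += 1
            log.warning("Skipping invalid record: %r", record)
            continue
        # ... the actual automated task would run here ...
        log.info("Processed invoice %s", record["invoice_id"])
    # Escalate if the error rate suggests the bot itself is misbehaving.
    if records and failures / len(records) > 0.2:
        alert_ops(f"{failures}/{len(records)} records failed validation; run flagged for review")

process_invoices([{"invoice_id": "INV-1", "amount": 120.0}, {"amount": -5}])
```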


How to ask the board and C-suite for security funding

Risk acceptance is the board's prerogative. So, Budiharto advises CISOs to calculate and communicate the cost of not implementing the solution, including the likelihood of a breach or exposure, and the full financial impact of such a breach or exposure (from direct losses to cleanup costs) should the funding request be denied. "To the CFO, those savings should far outweigh the TCO of implementing and managing the solution," she adds. Putting it all together, she describes a scenario where a new solution needs to be added to the existing EDR to stop ransomware in its tracks, kill it, and remediate it faster and more thoroughly than their existing EDR does. "The board will ask, 'How is that related to the bottom line?' So, I calculate the loss of revenue in productivity and loss of business and multiply that by the average days of trying to resolve a ransomware attack under the current EDR system," Budiharto explains. "These types of comparisons will help the board see the big picture, including how your solution will help avoid that big expense."
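As a worked illustration of that comparison (all figures below are invented placeholders, not survey data or Budiharto's numbers), the calculation reduces to multiplying estimated daily losses by the expected days to resolve, weighting by likelihood, and setting the result against the solution's total cost of ownership:

```python
# Illustrative placeholder figures only; substitute your own estimates.
daily_productivity_loss = 80_000   # lost output per day during an incident
daily_revenue_loss = 150_000       # lost business per day during an incident
avg_days_to_resolve = 21           # average resolution time under the current EDR
likelihood_per_year = 0.3          # estimated annual probability of a ransomware event

expected_annual_loss = likelihood_per_year * avg_days_to_resolve * (
    daily_productivity_loss + daily_revenue_loss
)

solution_tco_per_year = 400_000    # licences, implementation, and operations

print(f"Expected annual loss if unfunded: ${expected_annual_loss:,.0f}")
print(f"Annual TCO of the proposed control: ${solution_tco_per_year:,.0f}")
print(f"Net case for funding: ${expected_annual_loss - solution_tco_per_year:,.0f}")
```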


Gartner: CIOs must prepare for generative AI disruption

Beyond business leaders, Gartner noted that governments also have put in place a strong commitment to AI and are prioritising strategies and plans that recognise AI as a key technology in both private and public sectors. This includes incorporating AI into long-term national planning, which is being reinforced through the implementation of corresponding acts and regulations to bolster AI initiatives. “Implementation at a national level will solidify AI as a catalyst for enhancing productivity to boost the digital economy,” said Plummer. “Successful implementation of large-scale AI initiatives necessitates the support and collaboration of diverse stakeholders, showcasing the mobilisation and convening ability of national resources.” Among the key application areas for CIOs and IT leaders is the ability for generative AI to help IT departments manage older systems. According to Gartner, generative AI tools will be used to explain legacy business applications and create appropriate replacements, reducing modernisation costs by 70%, by 2027.


CIOs assess generative AI's risk and reward for software engineers

While most CIOs are choosing to keep generative AI tools away from production environments, it might not be long before IT professionals start using generative AI for disparate elements of the software development and engineering process. "The main message I have is to get your staff up to date and put the resources into training, and then take advantage of it," she says. "It's incredible what you can do with code generation now. I could build an entire application without knowing any JavaScript or how to code. But you must be educated on all the pluses and the minuses -- and that doesn't happen overnight." That's a sentiment that resonates with Omer Grossman, global CIO at CyberArk. In an interview with ZDNET, he suggests now is the time to start exploring generative AI. "Leaders should make decisions," he says. "And I'm emphasizing that point because if you don't make any decisions because you are risk-averse, you risk missing out." For business leaders who are thinking about how to use generative AI in areas such as software development and engineering, Grossman suggests a range of steps.


Closing ‘AI confidence gap’ key to powering its benefits for society and planet

The research by BSI, the UK-headquartered business improvement and standards company, was commissioned to launch the Shaping Society 5.0 essay collection, which explores how AI innovations can be an enabler that accelerates progress. It highlights the importance of building greater trust in the technology, as many expect AI to be commonplace by 2030, for example, automated lighting at home (41%), automated vehicles (45%) or biometric identification for travel (40%). A little over a quarter (26%) expect AI to be regularly used in schools within just seven years. Interestingly, three-fifths of the respondents globally (61%) want international guidelines to enable the safe use of AI, indicating the importance of guardrails to ensure AI’s safe and ethical use and build trust. For example, safeguards on the ethical use of patient data in healthcare are important to 55% of the respondents of the survey globally. Engagement with AI is markedly higher in two of the fastest-growing economies: China (70%) and India (64%) already use AI every day at work.


Exponential Thinking: The Secret Sauce Of Digital Transformation

The first crucial step in embracing exponential thinking is to reframe your relationship with fear and failure. We often view challenges or setbacks as threats, paralyzing us into inaction. Instead, reframe your fears as opportunities for learning and growth. When faced with a challenge, ask yourself questions like, "What can I learn from this?" or "How can this experience help me grow?" This shift in perspective will make you more resilient and open to new experiences, which is the core foundation for exponential thinking. ... Exponential thinking, which leads to exponential growth, rarely happens in isolation; it's a team effort. Make it a point to regularly interact with people outside your immediate team and field of expertise; connect with folks from different departments and even different fields. Whether it's through inter-departmental meetings, cross-functional projects or internal hackathons, the fusion of different perspectives can ignite innovative solutions with exponential potential. In a world aiming for exponential success, an organizational culture that champions team collaboration across all departments is not just beneficial—it's imperative.


Hackers Hit Secure File Transfer Software Again and Again

Vulnerabilities continue to surface in file transfer tools. In May, Australian cybersecurity firm Assetnote alerted Citrix to a critical vulnerability in the ShareFile storage zones controller, or SZC, in its cloud-based secure file-sharing and storage service known as Citrix Content Collaboration. Citrix patched the flaw on May 11, notified customers directly about the vulnerability and helped them lock it down. Citrix also blocked unpatched hosts from connecting to its cloud component, thus limiting any hacking impact to a customer's own environment. The U.S. Cybersecurity and Infrastructure Security Agency warned in August that the Citrix ShareFile vulnerability was being actively exploited by attackers. ... Security experts have warned users of secure file transfer software to safeguard themselves, given the risk of more such attacks perpetrated by Clop or copycats. One challenge with Clop's attacks is that the group has somehow continued to obtain access to zero-day vulnerabilities in the products, meaning even fully patched software could be - and was - exploited.


How Do We Manage AI Hallucinations?

The analogy between fictitious responses produced by a machine and sensory phenomena in humans is clear: Both produce information that is not grounded in reality. Just as humans experiencing hallucinations may see vivid, realistic images or hear sounds reminiscent of real auditory phenomena, LLMs may produce information in their “minds” that appears real but is not. ... While the ultimate causes of AI hallucinations remain somewhat unclear, a number of potential explanations have emerged. These phenomena are often related to inadequate data provision during design and testing. If a limited amount of data is fed into the model at the outset, it will rely on that data to generate future output, even if the query is reliant on an understanding of a different type of data. This is known as overfitting, where the model is highly tuned to a certain type of data but incapable of adapting to new types of data. The generalizations learned by the model may be highly effective for the original data sets but not applicable to unrelated data sets.
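A small, self-contained sketch (a toy regression, deliberately unrelated to any particular LLM) makes the overfitting idea concrete: a model tuned tightly to a narrow training set scores well there and poorly on new data from the same underlying process.

```python
# Toy illustration of overfitting: a high-degree polynomial fits its small training
# set almost perfectly but generalizes badly to new points.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, x_train.size)

x_test = np.linspace(0, 1, 50)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 7):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```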


When your cloud project is over budget

This is likely your fault since you did not plan well and missed many things that became unexpected costs or delays. Also, there are known budget issues around migrating or developing new systems and how much they cost to operate after being deployed. We’re talking about both. Not everyone is an excellent planner, but there is a discipline to project management, including metrics and estimation approaches, that most IT projects choose to ignore. They provide a rough estimate of how long and how much money should be needed to do something meaningful in the cloud. Ignoring these guidelines is never good, so let’s learn from our mistakes and improve project planning. ... Engage in proactive communication with your cloud service providers to discuss your situation and explore any potential options for cost reduction. Yes, this means begging for a discount. Providers may offer flexible pricing plans, reserved instances, or cost optimization guides since it’s their system. Also, this may mean that you have to agree to future commitments for cloud service usage that may be out of this budget period. This could be an ethical or policy no-no at your company, so check with the CFO.


Bracing for AI-enabled ransomware and cyber extortion attacks

In addition to state-sponsored attacks by APTs, governments must deal with their fair share of criminal activity as well, particularly at lower levels of government where cybersecurity resources are especially scarce. This includes attacks against police departments, public schools, healthcare systems, and others. These attacks ramped up in 2023, a trend we expect to continue as cybercriminals look to easy targets from which to steal sensitive data like PII. Ransomware groups’ success is often less about technological sophistication and more about their ability to exploit the human element in cyber defenses. Unfortunately, this is exactly the area where we can expect AI to be of the greatest use to criminal gangs. Chatbots will continue to remove language barriers to crafting believable social engineering attacks, learn to communicate believably, and even lie to get what they want. As developers release ethically dubious and amoral large language models in the name of free speech and other justifications, these models will also be used to craft novel threats.



Quote for the day:

"Many of life’s failures are people who did not realize how close they were to success when they gave up." -- Thomas Edison

Daily Tech Digest - October 22, 2023

The AI Evolution Will Happen Faster Than Computing Evolution

Compute will still evolve, but how fast is the question? The Internet led to the massively distributed data center approach, which we know as the cloud, a terrible term, but I digress. But today the power of computing can only increase so much. Moore’s Law looks increasingly impossible to keep pace with as we develop transistors the size of an atom. Infrastructure limitations are causing all sorts of headaches for software vendors who now face a litany of options for maximizing AI systems to be more efficient with precious compute resources. ... It’s all about the data and its compounding growth. Having transactional data ready for analytics, too, with speed and efficiency, makes for the ability to scale AI systems. As we’ve seen, AI systems must be fast and SingleStore markets that capability with its in-memory and disk capabilities. There’s also the flexibility that customers demand — a hybrid approach that cuts across cloud services and on-premises. With SingleStore’s vector indexing and JSON handling, the capabilities open up further.


Preparing for the Shift to Platform Engineering

To effectively support the transition, leaders must commit to a culture of platform engineering. Simply adopting technology isn’t enough. It needs to be backed by a thorough strategy that allows developers to truly benefit from the tools and structures of platform engineering. What does this look like? Success requires leaders and developers to encourage collaboration and break down silos between operations and development teams. It’s possible to build a bridge between developers and operations by committing to cloud migration, creating a centralized platform and investing in collaborative tools and the strategy to back it up. To engage in platform engineering requires dedication to a collaborative culture instigated from the top, empowered by overall strategic decisions and operations. This includes continued learning for developers to stay on top of new languages, trends, challenges and priorities, internally and externally. Teams are more successful when they utilize performance metrics to track workflows that help them conduct effective maintenance and improve on a consistent and ongoing basis.


Data Governance in action: the CDO, the CISO and the Perks of collaboration

Maintaining independent reporting structures for the CDO and CISO, separate from the Chief Information Officer (CIO), is crucial. That’s because when they report directly to the executive leadership or the CEO, they can provide independent updates on data governance and cybersecurity, ensuring clarity and objectivity in decision-making for critical data-related matters. Due to this arrangement, senior management will have a holistic view of risk management, compliance, and strategic decision-making, without any biases that may arise from reporting to the CIO. Biases, in this context, can manifest in several ways. For example, a CIO might prioritise IT initiatives that align with the department’s goals or budget constraints, potentially overlooking or downplaying certain data governance or security concerns. Hence, this hierarchical reporting structure, with the CIO in the middle, can unintentionally filter or influence the information that reaches senior management, which could impact their ability to make well-informed, impartial decisions.


North Korean hackers are targeting software developers and impersonating IT workers

Diamond Sleet was observed using two attack paths: the first consisted of deploying the ForestTiger backdoor, while the second deployed payloads for DLL search-order hijacking attacks. Onyx Sleet used a different attack path: After successfully exploiting the TeamCity vulnerability, the threat actor creates a user account (named krtbgt), runs system discovery commands and finally deploys a proxy tool named HazyLoad to establish a persistent connection. “In past operations, Diamond Sleet and other North Korean threat actors have successfully carried out software supply chain attacks by infiltrating build environments,” Microsoft noted. North Korean state-sponsored hackers have been linked to a social engineering campaign targeting software developers through GitHub. By pretending to be a developer or a recruiter, the attacker managed to convince the victim to collaborate on a GitHub repository and ultimately download and execute malware on their device.


Five key questions about disaster recovery as a service

Almost any organisation can use DRaaS because it requires little in the way of hardware or up-front investment. However, its use is most common in organisations that want to minimise downtime, but cannot justify investment in redundant hardware, either on-premise or in a datacentre or colocation facility. This is likely to involve a trade-off between performance and recovery times, and cost. DRaaS that runs in the public cloud will be slower than dedicated systems, but it will still be faster to recover from than basic cloud-based backup or BaaS. Another application for DRaaS is where conventional DR systems are less practical. This includes branch and remote offices that may have lower bandwidth connections and little in the way of on-site IT support. There is also a trend towards use of DRaaS to provide resilience for cloud-based infrastructure. Such cloud-to-cloud disaster recovery can range from replicating entire cloud production environments or specific VMs to a secondary cloud location, to providing additional redundancy and continuity for SaaS applications and even Microsoft 365.


Blue-Green Deployment: Achieving Seamless and Reliable Software Releases

Blue-green deployment is a software deployment strategy for reducing risk and downtime when releasing new versions or updates of an application. It entails running two parallel instances of the same production environment, with the “blue” environment serving as the current stable version and the “green” environment hosting the new version under validation. With this configuration, switching between the two environments can be done without disrupting end users. The fundamental idea behind blue-green deployment is to automatically route user traffic to the blue environment to protect the production system's stability and dependability. Developers and QA teams can validate the new version while the green environment is being set up and thoroughly tested before it is made available to end users. ... The advantages of blue-green deployment are numerous. By maintaining parallel environments, organizations can significantly reduce downtime during deployments.
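To make the traffic-switching idea concrete, here is a minimal sketch in Python with invented names: all requests go to the active environment, and promotion is a single atomic flip once the new version has been validated. Real deployments typically do this at the load balancer or service-mesh layer rather than in application code.

```python
# Minimal sketch of the blue-green switch. Environment names and URLs are
# illustrative; production setups flip traffic at the load balancer instead.
import threading

ENVIRONMENTS = {
    "blue": "https://app-blue.internal.example",    # current stable version
    "green": "https://app-green.internal.example",  # new version under validation
}

class BlueGreenRouter:
    def __init__(self, active: str = "blue") -> None:
        self._active = active
        self._lock = threading.Lock()

    def route(self) -> str:
        """Return the upstream that should receive live user traffic."""
        with self._lock:
            return ENVIRONMENTS[self._active]

    def promote(self, target: str) -> None:
        """Flip traffic to the validated environment; the other stays warm for rollback."""
        if target not in ENVIRONMENTS:
            raise ValueError(f"unknown environment: {target}")
        with self._lock:
            self._active = target

router = BlueGreenRouter()
print(router.route())     # blue serves users while green is set up and tested
router.promote("green")   # cut over once validation passes
print(router.route())     # rolling back is just promote("blue")
```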


Shaping the Future of Hybrid Quantum Algorithms for Drug Discovery

One of the main challenges of drug discovery is simulating the interaction between molecules to, for instance, predict the potency of a drug. Accurately simulating the behavior of a single molecule is tricky since the number of possible interactions with other molecules skyrockets as you increase the overall number of molecules. Computer-aided drug discovery has been around for about 40 years. However, due to limited computational powers, the first software packages had to simplify the physics and depended a lot on experimental validation—which is, to this day, a lot of trial and error. As the computational power of computers increases, and as physics models become more and more complex, we’ll be able to run more accurate simulations that not only spare us a lot of experimental testing but also allow us to develop entirely new drugs. Simplistic models haven’t previously tapped a vast chunk of the chemical search space. Quantum computing is still very early, and quantum computers have yet to demonstrate a practical advantage over supercomputers. 


A technology lawyer suggests how artificial intelligence can benefit every Indian tangibly

As impressive as AI has been so far, we are, at the time of this writing, on the brink of yet another transformation that promises to be even more dramatic. Over the past year or so, remarkable improvements in the capabilities of large language models (LLMs) have hinted at a new form of emergent ‘intelligence’ that can be deployed across a range of applications whose full scale and scope will only become evident over time. So powerful is the potential of this new technology that some of the brightest minds on the planet have called for a pause in its development out of the fear that it will lead to a SkyNet future and the genuine threat of unleashing malicious artificial general intelligence. LLMs are computer algorithms designed to generate coherent and intelligent responses to queries in a humanlike conversational manner. They are built on artificial neural networks that have typically been trained on massive data sets that allow them to learn language structure. LLMs can learn without being explicitly programmed. 


Team Topologies: A Game Changer For Your Data Governance Organization

Managing data is not only a technological task, but also an organizational one. It requires successful coordination and collaboration between different teams and stakeholders. Here, priorities, goals, and perspectives often differ, making it difficult to establish effective work processes and communication structures. Another key aspect is the clear definition of roles – such as the role of a data architect or the role of a master data manager – and their responsibilities in the context of the data organization. Without clear structures, misunderstandings and conflicts can arise, negatively impacting data management efficiency and business processes. Given these challenges, implementing effective data management and data governance practices sometimes seems daunting. However, it is a critical factor in the success of data-driven organizations, and strategies exist to overcome these challenges. One promising strategy is to apply innovative collaboration models and team structures.


Soft Skills Play Significant Role in Success of IT Professionals

A person with strong problem-solving skills typically demonstrates the ability to analyze complex issues systematically, break them down, and identify effective solutions, according to Haggarty. "They showcase critical thinking, resourcefulness, and a willingness to explore alternative approaches," she noted. "Effective problem-solvers are also skilled in evaluating potential consequences and making informed decisions." In addition, their capacity to collaborate with diverse teams also contributes to successful problem-solving in dynamic work environments. In the tech industry, networking facilitates idea exchange and exposure to diverse perspectives. Haggarty said networking is highly ranked due to its potential to foster collaboration, knowledge sharing, and professional growth. "Establishing strong professional relationships can lead to opportunities for collaboration, career advancement, and staying informed about industry trends," she said. "It can also aid with problem-solving by connecting individuals with complementary skills to address multifaceted challenges."



Quote for the day:

"If my mind can conceive it, my heart can believe it, I know I can achieve it." -- Jesse Jackson

Daily Tech Digest - October 20, 2023

Can anyone buy a quantum computer?

One of the primary reasons quantum computers are not readily available to the general public is their extraordinary technical requirements. These machines require an extremely controlled environment with temperatures close to absolute zero to prevent interference from external factors. Additionally, the delicate nature of qubits makes them susceptible to errors caused by even the slightest disturbances, necessitating advanced error correction techniques. Moreover, the cost of building and operating quantum computers is exorbitant. The infrastructure required to house and maintain these machines, along with the specialized equipment and expertise needed to handle them, makes them financially unattainable for most individuals or even small businesses. However, despite the current limitations, efforts are being made to democratize access to quantum computing. Some companies are exploring cloud-based quantum computing services, allowing users to access quantum computers remotely through the internet. This approach eliminates the need for users to have their own quantum hardware, making the technology more accessible to a wider audience.


The real impact of the cybersecurity poverty line on small organizations

The ‘cybersecurity poverty line’ is real! That said, I don’t believe people, processes, or technology are limiting factors because significant risk reduction is simple (technology), easy (people/process), and cheap. Bluntly, many organizations aren’t ‘brushing their teeth’ in cybersecurity. China isn’t targeting 99.9% of organizations, and ransomware isn’t advanced – things like ‘100% of people use strong MFA’ is the most cost-effective thing most organizations can do to reduce their cyber risk dramatically. ... Appreciate that Maslow’s hierarchy of needs applied to cybersecurity dictates that revenue trumps security. We have a responsibility to steward finite resources, and the fact is that most organizations can be adequately secured with a very modest budget. The limiting factor is knowledge/leadership – what to do, when, and why. ... ‘Everyone knows’ that when you are a CISO, you first do a risk assessment against a framework. This takes X months, costs Y dollars, and involves many discussions with the IT and security folks. I’d rather take a few days to talk to the various executives to understand the business and see where I can massively reduce risk while enabling the business.


Lost and Stolen Devices: A Gateway to Data Breaches and Leaks

When a computer is lost or stolen, the data it contains becomes vulnerable to unauthorized access. Despite substantial investments in endpoint security controls, devices are often not as secure as organizations would hope. This vulnerability has led to numerous high-profile data breaches over the years. ... When a computer falls into the wrong hands, unauthorized access to sensitive data becomes a real threat. Even if the device is password-protected, threat actors can employ various techniques to bypass security measures and gain access to files, emails, and other confidential information. ... Without encryption, thieves can easily access and misuse sensitive data, putting both individuals and organizations at risk. Having encryption enabled is often a legally required control, and not being able to prove its efficacy can expose an organization to liability. ... In some cases, lost or stolen computers are used as a means to gain physical access to corporate networks. If an employee’s laptop is stolen, and it contains access credentials or VPN configurations, the thief may use this information to infiltrate the organization’s network. 


Cracking the Code: Secure Software Architecture in a Generative AI World

Code vulnerabilities serve as entry points for attackers. Given the complexity of GAI models, these vulnerabilities can be nuanced. We are in the early days of using code generation to inject vulnerabilities. Now is the time to take action by keeping humans in the loop with static code analysis and code reviews. Static code analysis: Conducting static code analysis (SCA) can help identify vulnerabilities in the code without running the program. This is crucial as running a program with vulnerabilities could compromise the entire system. SCA also enables compliance monitoring against standards such as the Federal Information Processing Standards (FIPS) and other NIST guidelines. Code reviews: Peer-reviewed coding practices allow for a second set of eyes to catch potential vulnerabilities, reducing the likelihood of a security breach. Make this a mandatory step in your DevSecOps process to catch and fix issues before they escalate. The intricate nature of GAI models amplifies the risks associated with code-level vulnerabilities.
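One way to keep humans in the loop is to gate generated code on an analysis pass before review. The sketch below shells out to bandit, a Python-oriented static analysis tool, as one example; the choice of scanner and the hard-fail behavior are assumptions, not a mandated approach, and the script presumes the tool is installed.

```python
# Sketch: gating generated code on a static analysis pass before human review.
# Assumes the "bandit" scanner is installed and on PATH; swap in whatever
# analyzer your organization has standardized on.
import subprocess
import sys

def run_static_analysis(path: str) -> int:
    """Run the analyzer and return its exit code (non-zero means findings or errors)."""
    result = subprocess.run(["bandit", "-r", path], capture_output=True, text=True)
    print(result.stdout)
    return result.returncode

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    if run_static_analysis(target) != 0:
        print("Static analysis reported issues; blocking merge pending human review.")
        sys.exit(1)
```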


Global Chip Shortage: Everything You Need to Know

Supplies of chips began to improve in 2022, due in part to additional capacity with the slowdown in sales of PCs, smartphones and consumer electronics. Foundries in Taiwan reallocated some of this capacity to the automobile and industrial end markets, according to JP Morgan. However, automakers are increasingly requiring chips with higher computing power — especially as the industry transitions to electric and autonomous vehicles, which are significantly different from the ones used in PCs and smartphones. Other issues include tensions between the U.S. and China, which continue to impact the global supply chain. This is ” … spurring new government controls on sales of chips to China,” the world’s largest semiconductor market, the Semiconductor Industry Association noted in its State of the Industry report. There are other significant policy challenges as well, such as the ability to strengthen the U.S. semiconductor workforce by reforming the country’s high-skilled immigration and STEM education systems to increase the number of workers and help contain the talent shortage, according to the SIA.


CDO interview: Carter Cousineau, vice-president of data and model governance, Thomson Reuters

Cousineau says a key part of the work she’s undertaking at Thomson Reuters involves building the foundational elements for effective data governance. “That’s anything around applying policies and standards, and then moving those approaches into action, which involves the implementation of any controls and tools that can help, support and validate the work we’re doing in practice,” she says. .... “My approach to governance and ethics was not to build different frameworks and tools that wouldn’t be able to fit into everyone’s everyday workflows. These workflows differ greatly around the business. The way finance, for example, uses AI machine learning models is very different than product or sales,” she says. “We spent a lot of time understanding the workflows. The last thing I want to do is to make data scientists, model developers and product owners have another list of things to do. If you can make governance and ethics part of their workflows automatically, it becomes a lot easier – and we’ve done that.”


Open Source Development Threatened in Europe

European developers would stop contributing upstream to open source software projects in the event of the passage of the CRA, said Greg Kroah-Hartman, a fellow at the Linux Foundation and the maintainer of the stable branch for Linux. Furthermore, it may mean the use of Linux in Europe is untenable. ... As it stands now, the CRA burdens open source developers. It makes them liable for the open source code they share. Technologies considered “critical” face the most significant scrutiny. These critical technologies include operating systems, container runtimes, networking interfaces, password managers, microcontrollers, etc. The language may change, but it will go into the CRA unless some last-minute changes are made. The CRA calls for standards that still need to be developed. High-risk critical products like an OS would require mandatory third-party assessments. Developers must perform a cybersecurity risk assessment to ensure the product delivers without vulnerabilities. 


How to use structured concurrency in C#

Structured concurrency is a strategy for handling concurrent operations in asynchronous programming. It relies on task scopes and proper resource cleanup to provide several benefits including cleaner code, simpler error handling, and prevention of resource leaks. Structured concurrency emphasizes the idea that all asynchronous tasks should be structured within a specific context, allowing developers to effectively compose and control the flow of these tasks. To better manage the execution of async operations, structured concurrency introduces the concept of task scopes. Task scopes provide a logical unit that sets boundaries for concurrent tasks. All tasks executed within a task scope are closely monitored and their lifecycle is carefully managed. If any task within the scope encounters failure or cancellation, all other tasks within that scope are automatically canceled as well. This ensures proper cleanup and prevents resource leaks. ... In C#, we can implement structured concurrency by using the features available in the System.Threading.Channels namespace. This namespace offers helpful constructs like Channel and ChannelReader that make implementing structured concurrency easier.
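The article's examples are in C#; the same scoping idea can be sketched in Python with asyncio.TaskGroup (Python 3.11+), which is an analogous construct rather than the C# API itself: tasks started inside the scope are awaited before the scope exits, and a failure in one task cancels its siblings.

```python
# Structured concurrency sketched with Python's asyncio.TaskGroup (3.11+):
# every task lives inside an explicit scope, and one failure cancels the rest.
import asyncio

async def fetch(name: str, delay: float, fail: bool = False) -> str:
    await asyncio.sleep(delay)
    if fail:
        raise RuntimeError(f"{name} failed")
    return f"{name} done"

async def main() -> None:
    try:
        async with asyncio.TaskGroup() as scope:        # the task scope
            scope.create_task(fetch("report", 0.2))     # cancelled when a sibling fails
            scope.create_task(fetch("export", 0.1, fail=True))
        # The scope only exits once every task has finished or been cancelled.
    except* RuntimeError as errors:
        print("scope failed:", [str(e) for e in errors.exceptions])

asyncio.run(main())
```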


CIOs press ahead for gen AI edge — despite misgivings

Power supply giant Generac is one company that’s all in on gen AI, says CIO Tim Dickson. “We are now fully embracing generative AI, with three innovative pilots that are live,” he says. “First, we launched a private instance of GPT-3.5 for internal enterprise exploration. Next, we launched a customer service chatbot to answer customer call questions for our customer service reps. Lastly, we tapped into our data lake to enrich and tailor specific customer emails to drive the conviction of our products and ultimately increased sales. These three programs are already delivering value for the business.” And doing so requires taking risks, he says, something he believes IT leaders must embrace to succeed today. “We are indoctrinating a culture of gen AI within the company,” he adds. Still, the widening availability of gen AI to the public at large keeps many CIOs awake at night. Few enterprises have slammed the brakes, but no doubt it has led to a high emphasis on corporate guardrails, frameworks, and shared responsibility in the C-suite.


Data Governance vs. Data Management

Data Management covers implementations of policies and procedures that do not fall under the mantle of Data Governance. Mainly, focusing on specific technologies and tools and their applications lies outside Data Governance. To understand why these Data Management activities and discussions happen outside of Data Governance, consider that Data Governance meetings mainly comprise businesspeople, councils, subject matter experts (SMEs), stewards, and partners without specialized IT knowledge. While Data Governance members want to remain informed about Data Management at a high level, they do not need the technical details. For example, Data Governance discussions may center around protecting data and creating standards around encrypting data. However, IT staff may take conversations deeper, outside of Data Governance, by discussing what encryption algorithm to use and when, how to customize it through ENCRYPT-CSA, or how big to make the key size. By moving the technical details outside of Data Governance, organizations can focus on data-driven culture initiatives, change an organization’s approach towards data, and address other human behaviors without getting bogged down in minutia.



Quote for the day:

"The manager asks how and when, the leader asks what and why." -- Warren Bennis

Daily Tech Digest - October 19, 2023

Regulations are still necessary to compel adoption of cybersecurity measures

Ultimately, though, there should be clear mandates to push the industry toward clear outcomes, Rivas said. Such requirements, for example, could include a proper patch management strategy and robust monitoring system, Sondhi said. These should be accompanied by roadmaps for rollout, so market players would be given the necessary timelines to ensure compliance, he added. Acknowledging there will inevitably be pushback over concerns such mandates have on cost and time-to-market, he said regulations need not be overly complex. They also can point to accompanying standards bodies tasked to provide more details and update the adoption of best practices when necessary. This will free up governments from having to keep up with market changes and to instead focus on mandating high-level requirements, he noted. Enforcement also is a good starting point when the road toward cyber resilience may be long and fraught with complexities. Organizations in operational technology (OT) sectors, in particular, have ecosystems that have to be managed differently from IT infrastructures, Sondhi said.


Beyond The 10X Software Engineer: Focusing On The Bigger Picture

Match team responsibilities with the load they can handle. You can do this with additional training, a good choice of underlying technologies, pair programming, reshuffling responsibilities among teams and strategic hiring for the critical skills still missing. For new team members, focus on tasks doable first within a four-hour time slot and then in two to three days, so they can experience repeatable success right away. With time, you can extend the average task timeline to two weeks. Make sure there's a variety of tasks of similar complexity. For example, you don't want to corner a software developer into only fixing bugs. Mix things up to challenge team members with creative tasks like minor new functionalities. Eliminate excessive bureaucracy and low-value business processes. Boring or superfluous administrative tasks are a real motivation drain, so reviewing them and removing those with low value has a visible impact. Map team competencies to task complexity. This can be done formally or informally and usually narrows down the list of competencies for each team.


The Purpose of Estimation in Scrum: Sizing for Sprint Success

In response to the limitations of hours-based estimation, Scrum Teams are turning to alternative methods such as relative estimation (using points). Alternatively, teams are increasingly using flow metrics as a simpler and often more accurate way to forecast value delivery. Relative estimation is a technique used to estimate the size, complexity, and effort required for each Product Backlog item. To use relative estimation, the Scrum Team first selects a point system. Common point systems include Fibonacci or a simple scale from 1 to 5. (See our recent article How to create a point system for estimating in Scrum.) Once the team has selected a point system, they create an agreement which describes which types of items to classify as a 1, 2, and so on. It is possible for the team just to use its judgment, documenting that information in their team agreements. Then, when the team needs to estimate new work, they simply compare the new work to similar work done in the past, and assign the appropriate number.
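A minimal sketch of such a team agreement, with reference items and point values invented purely for illustration, might look like this:

```python
# Illustrative team agreement: reference work the team has done before, mapped to
# relative point values. New items are sized by comparing them to these references.
REFERENCE_ITEMS = {
    1: "Copy change on an existing screen",
    2: "New field with validation on an existing form",
    3: "New report built from one existing data source",
    5: "New integration with a well-documented external API",
    8: "New workflow spanning several services, needs design discussion",
}

def estimate(item_description: str, most_similar_points: int) -> int:
    """Record the comparison the team made; the judgment itself stays with the team."""
    print(f"'{item_description}' sized like: {REFERENCE_ITEMS[most_similar_points]}")
    return most_similar_points

estimate("Add export-to-CSV to the orders report", most_similar_points=3)
```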


What Enterprises Need to Know About ChatGPT and Cybersecurity

Receiving the most valuable information from ChatGPT requires asking the correct questions and expanding on the initial inquiry to obtain desired results and a deeper understanding. Hackers are learning that they cannot ask ChatGPT a directly malicious question, or they will receive a response such as, “I do not create malware.” Instead, they ask it to pretend that ChatGPT is an AI model that can execute a particular script. Bad actors continue to exploit and socially engineer the process of installing malware or getting people to relinquish credentials for unauthorized data system access. AI tools are making it easier for cybercriminals to harm people. ... One noteworthy point is that the ability to use AI to manipulate humans through social engineering is becoming increasingly controllable. However, ChatGPT is not a Rosetta Stone-like translator for hackers. Although both AI-generated scripts and social media platform scripts are made by machines, their complexity, reliability and security can differ significantly.


Weathering the Storm: A Guide to Preserving Business Continuity

Organizations that are most vulnerable to disruption tend to be those that rely on legacy systems that have a single point of communications failure. The additional risk exposure that accompanies these older networks may well justify shifting to a cloud-based network (such as SD-WAN, a software-defined wide area network) that provides the flexibility to bounce between broadband and ethernet in real time to preserve bandwidth and connectivity. Similarly, it may be worth considering moving to a unified communications platform, which is designed to maintain multichannel communications for customers and employees. ... Based on the risk assessment, create a formal, highly detailed plan specifying how your organization will manage various crisis scenarios, the tools it will use to keep the business running, and how, and by whom, information will be communicated internally and externally. The plan also should identify critical on-premises hardware and brick-and-mortar IT infrastructure (such as data centers) that must be protected, and how they will be protected. Organizations with a continuity plan already in place should revisit it at least annually and update it as needed.


Phishing emails are more believable than ever. Here’s what to do about it

Because most ransomware is delivered through phishing, employee education is essential to protecting your organization from these threats. That said, there’s no single “one size fits all” education program--these training efforts should be tailored to your enterprise's unique needs. Below are several types of services and/or programs that are designed to help users understand and detect phishing and other cyber threats, all of which can serve as a great starting point for building a comprehensive employee security awareness program. ... Delivering simulated phishing emails to your organization’s employees allows them to practice identifying malicious communications so that they know what to do when a threat actor strikes. The FortiPhish Phishing Simulation Service uses real-world simulations to help organizations test user awareness and vigilance to phishing threats and to train users on what steps to take when they suspect they might be a target of a phishing attack. ... As with the introduction of any new technology, cybercriminals will continually find ways to use these tools for nefarious purposes. 


9 Steps to Platform Engineering Hell

The platform team still works with a DevOps mindset and continues to write pipelines and automation for individual product teams. They get too many requests from developers and don’t have the time or resources to zoom out and come up with a long-term strategy to build a scalable IDP and ship it as a product to the rest of the engineering organization. ... More platform engineers are finally hired on the team, all very experienced, with years working in operations. They come together and think hard about the biggest Ops issues they experienced during their careers. They start designing a platform to fix all those annoying issues that bugged them for years, but developers will never use this platform. It doesn’t solve their problems; it only solves Ops problems. ... Because you’re a large enterprise with inefficient cross-unit communication, mid-management starts several platform engineering initiatives without aligning with each other. Leadership doesn’t intervene, efforts double, communication is not facilitated and gets progressively worse. You end up with five platforms for five teams, most of which don’t work at all.


The must-knows about low-code/no-code platforms

Low-code/no-code platforms inadvertently make it easy to bypass the procedural steps in production that safeguard code. This issue can be exacerbated by a workflow’s lack of developers with concrete knowledge of coding and security, as these individuals would be most inclined to raise flags. From data breaches to compliance issues, increased speed can come at a great cost for enterprises that don’t take the necessary steps to scale with confidence. ... Maintaining a strong team of professional developers and guardrail mechanisms can prevent a Wild West scenario from emerging, where the desire to play fast and loose creates security vulnerabilities, mounting technical debt from a lack of management and oversight happening at the developer level, and inconsistent development practices that spur liabilities, software bugs, and compliance headaches. AI-powered tools can offset complications caused by acceleration and automation through code governance and predictive intelligence mechanisms; however, enterprises often find themselves with a piecemealed portfolio of AI tools that create bottlenecks in their development and delivery processes or lack proper security tools to ensure the quality of code.


What It Takes To Architect A Culture Of Cybersecurity

Just because organizations impart mandatory compliance and security awareness training to their employees does not mean employees will act securely. This is because of something called the knowledge-behavior gap. Having knowledge does not mean that people behave in a certain way. For them to transition from knowledge to behavior, they also need “acceptance” and “intent.” Think of it like the speed limit sign we consciously choose to ignore. We know the sign’s there, we know it’s against the law to exceed it, we know that speeding kills, and yet we choose to turn a blind eye. Since most organizations do not actively manage and cultivate their security culture, they assume that it does not exist in their organization. The reality is that every organization, regardless of size, has a culture. The way in which organizations and leadership teams treat, value, and manage security, influences and builds its security culture. Unfortunately, most organizations do not track the security-related aspects of their culture in its early stages and eventually, it ends up spiraling out of control and manifesting into something the organization may have difficulty reversing.


A semantic layer allows business users with little or no technical skills to access and consume data without needing to understand the underlying technical complexities. It makes data more accessible and understandable to non-technical users, enabling them to easily query, analyze, and make informed decisions based on the data. ... Integrating data into a semantic layer from multiple sources -- each with its own structure, format, and levels of detail -- can be a complex undertaking. The process of harmonizing these sources demands time and meticulous attention to detail. Creating intricate business views using precise calculations within the semantic layer presents yet another challenge. Applying complex formulas, conditional rules, and computations across multiple data sources is a grueling task. Mapping business metrics with consistency in calculations and hierarchies across diverse BI tools can be highly complicated as each tool handles it in a different manner. ... You’ll need a scalable and efficient semantic layer that is adept at collaborating with multiple BI tools. 
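One common way to tackle the consistency problem is to define each metric once, in a tool-neutral form, and derive the tool-specific expressions from that single definition. The sketch below is a simplified illustration of that idea; the field names, metric, and rendering functions are invented and do not represent any particular semantic-layer product's syntax.

```python
# Simplified sketch: one canonical metric definition, rendered consistently for
# different BI tools. Field names and dialect details are illustrative only.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    expression: str      # tool-neutral aggregation over the governed data model
    description: str

GROSS_MARGIN = Metric(
    name="gross_margin",
    expression="SUM(revenue) - SUM(cost_of_goods_sold)",
    description="Gross margin across all governed sales sources",
)

def to_sql(metric: Metric) -> str:
    """Render the metric for a SQL-speaking BI tool."""
    return f"SELECT {metric.expression} AS {metric.name} FROM sales_model"

def to_measure_definition(metric: Metric) -> str:
    """Placeholder rendering for a second tool; real translation is dialect-specific."""
    return f"{metric.name} := {metric.expression}"

print(to_sql(GROSS_MARGIN))
print(to_measure_definition(GROSS_MARGIN))
```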



Quote for the day:

"Nothing in the world is more common than unsuccessful people with talent." -- Anonymous