Daily Tech Digest - October 10, 2023

Crafting Leaders: The finishing touches

The process of narrowing the funnel for identifying future leaders must commence soon after fresh talent is inducted into the organization and certainly long before organizational knocks have bled the spirit, energy and desire-to-be-different from these young men and women. An earlier column explained how alternative fast-track schemes function and ways to choose and groom future leaders from early stages. More recently, I have added two codas to the exposition. When choosing leaders to face the uncertainties of tomorrow, it is not enough to capture their capabilities at the time of selection; one must also take into account the steepness of the slope they have traversed to get there. That is the best guarantee of future resilience and continued development in spite of handicaps. Moreover, constraints of time and shortage of the right kind of teachers prevent those rising to the top of the pyramid from formally refreshing their knowledge and capabilities as frequently as they should. ... The grooming of Fast-Trackers (FTers) must vary substantially from company to company and from individual to individual.


The undeniable benefits of making cyber resiliency the new standard

"It's about practicing due care and due diligence from a cybersecurity standpoint and having a layered defense with a layered people-process-and-technology-driven program with the right governance and services and tools to enable the mission of the organization so that if there's an event, you can recover and adapt to keep business running," he adds. To do that, CISOs and their executive colleagues must have their cybersecurity basics well established -- basics such as knowing their tolerance for risk, understanding their IT environment, their security controls, their vulnerabilities, and how those all could impact the organization's operations. CISOs aren't limited to these frameworks or the assessment tools created specifically to measure cyber resiliency, say Tenreiro de Magalhaes and others. CISOs can also run tabletop drills and red-team exercises to test, measure and report on resiliency. Repeating such drills and exercises can then track whether the organization's cybersecurity program, as well as specific additions to it, helps improve resiliency over time, experts say.


Hybrid work is in trouble. Here are 4 ways to make it work in the longer term

"We're all humans and we work with each other," he says. "To make hybrid working effective, there must be an element of interaction. There must be a connectivity, both to the business and your team." Warne says balance is essential, so find the right reasons for bringing people together in the office. "At River Island, it's about making sure that people are in for a purpose and not just presenteeism, and making sure that the people who need to work together are able to work together," he says. "If you work with a colleague, it's crucial you don't have a situation where one of you comes into the office and the other one works from home." Warne says his team doesn't have mandated days in the office. Instead, his organization's hybrid-working strategy is all about collaboration. ... However, hybrid working has allowed for an even higher level of flexibility in her organization -- and the key to success has been constant communication. Cousineau continues to listen to feedback from her team. One staff member suggested hybrid all-team meetings were creating a big divide between those who were present and those who weren't.


Evolution of stronger cyber threat actors: The flip side of Gen AI story

Deepfake technology, a subset of Generative AI, allows threat actors to create convincing video and audio forgeries. This presents a substantial threat to organisations as deepfake attacks can tarnish reputations, manipulate public opinion, and even influence financial markets. Imagine a scenario where a CEO’s voice is convincingly mimicked, disseminating false information that impacts stock prices; or consider a deepfake video of a prominent figure endorsing a product or idea they never actually supported. Such manipulations can lead to severe consequences for businesses and society at large. Generative AI is revolutionising the way malware is created. Threat actors can use AI algorithms to generate highly evasive and adaptable malware variants that can easily evade traditional signature-based antivirus solutions. These AI-generated malware strains constantly evolve, making detection and containment a significant challenge for cybersecurity professionals. Moreover, Generative AI allows for the customisation of malware based on the target environment. 


The CIO’s primary job: Developing future IT leaders

The challenge for IT management is to find people who are good at their current job but are also interested in the management side that is necessary for departmental success. In my opinion, the reason many IT departments have decided to go outside IT to bring in CIOs is because IT has not fostered the kind of environment that develops these types of professionals. IT has not traditionally tried very hard to develop strong managers from within. Most people learn to manage by watching what their managers do. And if people have bad managers, the results can be less than optimal. So how do we resolve that conundrum? First, we must commit our current managers and supervisors to a strong management training program. Once they have been trained in the subtleties of management, then we hopefully will begin to see new managers with skills developed from within. Effective management training can, and should be, structured around techniques that current managers use to be successful. Delegating effectively and encouraging career growth among staff are two examples.


Evolution of Data Partitioning: Traditional vs. Modern Data Lakes

In modern data lakes, data is organized into logical partitions based on specific attributes or criteria, such as day, hour, year, or region. Each partition acts as a subset of the data, making it easier to manage, query, and optimize data retrieval. Partitioning enhances both data organization and query performance. Instead of relying solely on directory-based partitioning or basic column-based partitioning, these systems provide support for complex, nested, and multi-level partitioning structures. This means that data can be partitioned using multiple attributes simultaneously, allowing for highly efficient data pruning during queries. ... Snapshots are a fundamental concept used to capture and manage different versions or states of a table at specific points in time. Snapshots are a key feature that enables Time Travel, data auditing, schema evolution, and query consistency within modern Data Lakes like Iceberg tables. Some important features of snapshots are below: Each snapshot represents a specific version of the data table. When you create a snapshot, it essentially freezes the state of the table at the moment the snapshot is taken.
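The idea of multi-attribute partitioning with pruning can be sketched in a few lines of plain, engine-agnostic Python (the records and attribute names are illustrative, not tied to Iceberg or any particular engine):

```python
from collections import defaultdict

def partition(records, keys):
    """Group records into partitions keyed by a tuple of attribute values."""
    parts = defaultdict(list)
    for rec in records:
        parts[tuple(rec[k] for k in keys)].append(rec)
    return parts

def query(parts, keys, **filters):
    """Scan only partitions whose key matches the filters (partition pruning)."""
    out = []
    for key, recs in parts.items():
        kv = dict(zip(keys, key))
        if all(kv[k] == v for k, v in filters.items()):
            out.extend(recs)  # non-matching partitions are never read
    return out

records = [
    {"year": 2023, "region": "EU", "amount": 10},
    {"year": 2023, "region": "US", "amount": 20},
    {"year": 2022, "region": "EU", "amount": 30},
]
# Partition simultaneously on two attributes, then prune at query time.
parts = partition(records, ("year", "region"))
hits = query(parts, ("year", "region"), year=2023, region="EU")
```

A real engine applies the same principle at file level: a filter on the partition columns lets the planner skip whole files or directories without opening them.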


Will Quantum Computers Become the Next Cyber-Attack Platform?

A quantum cyberattack would likely be similar to today’s identity theft and data breaches. “The only difference is that the damage would be more widespread, since quantum computers could attack a broad class of encryption algorithms rather than just the particular way that a company or data center implements the algorithm, which is how attacks are currently done,” explains Eric Chitambar, associate professor of electrical and computer engineering at the Grainger College of Engineering at the University of Illinois Urbana-Champaign. Chitambar also leads the college’s Quantum Information Group. ... Conducting an enterprise-wide quantum risk assessment to help identify systems that might be most vulnerable to a quantum attack would be a good place to start, Staab says. He also recommends deploying enterprise-wide Quantum Random Number Generator (QRNG) technology to generate quantum-resistant encryption keys. This approach promises crypto agility, implementation of Quantum Key Distribution (QKD) and the development of quantum-resistant algorithms. “As we head toward a quantum computing era, adopting a zero-trust architecture will become more important than ever,” Staab states.


6 Reasons Private LLMs Are Key for Enterprises

Private LLMs can be used with sensitive data — such as hospital patient records or financial data — and then use the power of generative AI to produce groundbreaking achievements in these fields. With the LLM running on your private infrastructure and only exposed to the people who should have access to it, you can build powerful customer-focused applications, chatbots or just provide an easier way for your employees to interact with your company data — without the risk of sending the data to a third party. ... With private LLMs, you can tailor the model and response to your company, industry or customers’ needs. Such specific information is not likely to be included in general or public LLMs. You can feed your LLM with customer support cases, internal knowledge-base articles, sales data, application usage data and so much more, ensuring that the responses you receive are what you’re looking for. ... Controlling versioning or the model you’re using is extremely important because if you change the model that you use to create embeddings, you will need to re-create (or version) all the embeddings you store.
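The embedding-versioning point can be made concrete with a small sketch. Here each stored vector is tagged with the (hypothetical) model version that produced it, so that after a model upgrade the stale vectors can be found and re-created:

```python
# Hypothetical current embedding-model identifier.
EMBED_MODEL_VERSION = "v2"

# A toy vector store: each row records which model built the embedding.
store = [
    {"doc_id": 1, "vector": [0.1, 0.2], "model": "v1"},
    {"doc_id": 2, "vector": [0.3, 0.4], "model": "v2"},
]

def stale_embeddings(store, current=EMBED_MODEL_VERSION):
    """Return ids of documents whose embeddings came from an older model."""
    return [row["doc_id"] for row in store if row["model"] != current]

to_recompute = stale_embeddings(store)
```

Without such a tag, vectors produced by different models end up in the same index and similarity scores between them become meaningless, which is exactly why the excerpt stresses versioning.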


Tech Revolution: The Rise of Automation and Its Impact on Society

To offset potential adverse effects, it is imperative for companies and governments to enact policies and initiatives supporting workers susceptible to automation’s impact. This may encompass training programs designed to furnish workers with the requisite skills to excel in the evolving job market, along with social support programs to aid those grappling with employment challenges. Public policy will emerge as a pivotal determinant of technological evolution’s trajectory and consequences. Economic incentives, education reforms, and immigration policies will directly influence productivity, employment levels, and enhanced economic mobility. ... Central and state government agencies ought to collaborate with industry partners and educational institutions to craft programs that equip new workers with the skills needed to thrive in an automation-driven world. These programs bear the potential to combat emerging inequality by propelling education and training initiatives that foster success for all.


When open source cloud development doesn't play nice

Remember that the cloud provider is merely “providing” the open source software. They are not typically supporting it beyond that. For more, you’ll need to look internally or in other places. Open source users, whether in the cloud or not, often have to rely on community resources, typically provided through forums or message boards, which takes time. This can impede cloud development progress in urgent, time-sensitive scenarios or when complex issues arise. A developer told me once that she needed to attend a meeting of the open source community before she could have a resolution to a specific problem—a meeting that was five weeks out. That won’t work. From a security standpoint, open source software can pose specific challenges. Although a community of developers regularly reviews such software, it can still harbor undetected vulnerabilities, primarily because its code is openly accessible. For instance, some open source supply chain issues arose a few years ago. These vulnerabilities can become severe security threats without stringent security measures and frequent updates. 



Quote for the day:

"Sometimes it takes a good fall to really know where you stand." -- Hayley Williams

Daily Tech Digest - October 08, 2023

How AI is enhancing anti-money laundering strategies for improved financial security

Financial institutions collect massive volumes of transactional data daily, making it impractical for human experts to manually review each transaction for signs of money laundering. AI systems, on the other hand, can efficiently process this data, flagging transactions that exhibit unusual patterns or deviate from established norms. These AI systems utilise advanced algorithms to develop customer behavior profiles, creating a baseline against which future transactions can be compared. Any deviation from the norm, such as sudden large transfers, frequent cash deposits, or transactions with high-risk jurisdictions, triggers an alert for further investigation. This allows institutions to focus their resources on genuinely suspicious activities rather than drowning in false positives. Analysing data to recognise suspicious activities: AI algorithms excel at analysing enormous datasets, identifying hidden patterns and correlations that could signify money laundering activities. By examining transaction history and customer behavior, AI-enabled tools can uncover links between seemingly unrelated events.
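The baseline-and-deviation mechanism described above can be sketched in a few lines. This is a deliberately simplified statistical stand-in (a z-score on transaction amounts plus a high-risk-jurisdiction list, all names and thresholds illustrative), not a production AML model:

```python
from statistics import mean, stdev

HIGH_RISK = {"XX", "YY"}  # illustrative jurisdiction codes

def build_profile(history):
    """Baseline a customer's behaviour from past transaction amounts."""
    return {"mean": mean(history), "stdev": stdev(history)}

def flag(txn, profile, z_threshold=3.0):
    """Alert on large deviations from the baseline or high-risk destinations."""
    z = (txn["amount"] - profile["mean"]) / (profile["stdev"] or 1.0)
    return z > z_threshold or txn["dest"] in HIGH_RISK

profile = build_profile([100, 120, 90, 110, 105])
suspicious = flag({"amount": 5000, "dest": "DE"}, profile)  # sudden large transfer
normal = flag({"amount": 100, "dest": "DE"}, profile)       # typical activity
```

Real systems replace the z-score with learned models, but the shape is the same: profile, compare, and surface only the outliers for human review.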


Record Numbers of Ransomware Victims Named on Leak Sites

At current levels, 2023 is on course to be the biggest year on record for victim naming on so-called ‘name and shame’ sites since this practice began in 2019. It is expected the 10,000th victim name was posted to leak sites in late summer 2023, but this has not yet been confirmed by Secureworks. ... The 2023 report found that ransomware median dwell time was under 24 hours, representing a dramatic fall from 4.5 days during the previous 12 months. In 10% of cases, ransomware was deployed within five hours of initial access. Smith believes this trend is due to improved cyber detection capabilities, with cyber-criminals speeding up their operations to reduce the chances of being stopped before deploying ransomware. “As a result, threat actors are focusing on simpler and quicker to implement operations, rather than big, multi-site enterprise-wide encryption events that are significantly more complex. But the risk from those attacks is still high,” commented Smith.


Cloud backup and disaster recovery evolve toward maturity

At the end of the day, backup as a service is kind of just that. It operates like a regular backup application, using a schedule and point-in-time backups. DRaaS is more about failing over if something comes up as a disaster recovery process. It's designed to replicate or restore data environments automatically; it doesn't transform data in the same sense that a backup may have a particular data format. DRaaS is about moving the data from point A to point B and being able to get back to it as quickly as possible, especially in the context of a failover. ... But with the flexibility that cloud data protection affords, a lot of these solutions can essentially get updated whenever you log on because they're SaaS-based. Also, there's so much data in the cloud now and lots of investment in digital transformation, new platforms and cloud-native applications, which is triggering some rethinking of cloud data protection strategies. All of this I think is shortening the review cycles. It's actually a domino effect: Data protection follows data production. 


Mitigating Security Fatigue: Safeguarding Your Remote Team Against Cyberthreats

It’s easy for remote workers to feel disconnected from their teams and employers, which is why it’s important to keep communication consistent. Having the right collaboration tools can make all the difference in keeping remote workers engaged and more likely to follow security protocols. Video calls can help team members meet face-to-face, reducing miscommunication and misunderstandings. It’s also important to have an easy way to collaborate on projects so everyone can stay on the same page and work moves forward efficiently. Of course, any technology you use should be easy to use and easy to keep secure. With the right communication tools, your remote team members can collaborate effectively, stay connected with team members, and generally remember that they aren’t at home alone — they belong to a larger organization. This feeling of connection will encourage and remind them to implement the company’s security standards even though they work from home. As remote work becomes more popular, the need for strong security practices becomes even more vital. 

One might be inclined to believe (from the Trellix example) that the returns and competitive business risks of adopting and not adopting AI in cyber-security processes are quite high from a sales perspective. This point can be rationalised by seminal academic theory in the strategic management sciences. Based on insights from the widely popular Five Forces strategy model by Michael Porter of the Harvard Business School, the threat of new entrants (Trellix competitors), product substitutes (competitor products churned from AI-driven platforms like HVS), high bargaining power of customers (clients of Trellix-like products), and low bargaining power of suppliers (Trellix) should push enterprises to necessarily adopt AI as a cyber-security strategy to boost sales. ... On top of everything, AI as a business strategy for the modern IT/OT-driven business ecosystems has the potential to align very well with certain elements of the seminal Eight-Fold strategy proposed by Michael Cusumano of the MIT Sloan School of Management for software-driven businesses.


How to Stay Ahead of the Regulatory Curve with Robust Data Governance?

Establishing a data governance culture requires the right combination of people, process, and technology. Defining the right roles and responsibilities (people) and developing the right data governance framework (process) are steps in the right direction. But without the right tools (technology), it becomes difficult at best for a data governance culture to succeed. A data catalog is a critical tool for organizations looking to establish a data governance culture. It gives business users, many of whom are not data experts, clarity on data definitions, synonyms, and essential business attributes so they can understand and use their data more effectively. Data catalogs show who owns the data, allowing for greater collaboration across the business. They provide a self-service way for everyone in the organization to find the data they need and turn what used to be tribal knowledge into useful and accessible information that they can use to make better business decisions.


Preparing for the Unexpected: A Proactive Approach to Operational Resilience

No firm can achieve operational resilience purely on its own. Intelligence sharing within the global financial community helps firms understand current and emerging threats and learn how others are mitigating them. It keeps larger institutions at the forefront of cybersecurity while arming smaller firms with knowledge and tools to protect themselves. It is so critical to operational resilience that DORA dedicates an entire article to it. Beyond regulation, the public sector is also increasingly collaborating with the private sector to protect critical infrastructure, which includes the financial sector. Around the world, organizations including the US Treasury Department's Hamilton Series and NATO's Locked Shields regularly conduct large-scale exercises to test that communication and coordination channels will function efficiently during major incidents. The goal is not only to minimize operational disruption but to proactively maintain public calm and trust. Operational risks are no longer geographically bound. Cross-border intelligence sharing and exercises help financial institutions build a comprehensive approach to operational resilience.


The Top 10 Hurdles to Blockchain Adoption

One of the most significant factors that has made blockchain adoption more difficult is the overall age of the average person using banking services. Unlike previous generations, the current demographic in the world is older than ever. Advancements in healthcare and other factors have increased life expectancy in most regions of the world. ... Energy consumption issues remain a top problem in the market. Conservationists have repeatedly pointed out that networks that leverage the Proof-of-Work consensus algorithm are power-hungry. The reason for this consumption is that the PoW system requires users to exercise their computational power as part of the validation structure. To combat these issues, there has been a steady migration of mining farms to renewables. ... Another issue that has held back blockchain adoption is the lack of supportive legislation for these projects. When there is a lack of governmental support, financial institutions are wary of joining an industry. The main reason for the concern is that they fear later regulatory pushback.


Redefining the Framework of Innovation

The impact of ecosystems on digital disruption today does draw sharp parallels to another important technological evolution. Specifically, it brings to mind the evolution of manufacturing and distribution technology which enabled the transition from vertical integration to multi-tier supply networks. The twist is ecosystem models look forward, not back in the value chain, enabling entire new value chains. However, while there are many clear benefits of ecosystems, these business models are contractually, logistically, and commercially complex. This is especially true when you factor in the challenges of partnering with early-stage tech companies. So, where should leaders begin when considering a partnership or alliance? Take inventory of your most critical innovation paths and evaluate them against the ecosystem model. Key criteria may include needs for outside expertise and intellectual capital, a reduction in capital risk and accelerated innovation delivery to the market. Focus time and resources on selecting the right ecosystem partner. 


Identifying The Right Risk Appetite For Your Business

While risk appetite has a traditional outlook, risk tolerance (or impact tolerance) helps companies move closer to the path of resilience. If risk appetite tells us how much risk an organization can take, risk tolerance indicates how much risk an organization "wants" to take in numbers. Essentially, tolerances are defined losses that an organization is willing to incur in meeting an objective. Every decision bears risks. If a business accepts risk or incurs loss due to a risk event that exceeds the agreed-upon risk appetite and tolerance levels, then serious fiscal, legal and reputational consequences can occur. For this reason, risk appetite should be reevaluated and reconciled whenever changes occur to strategic initiatives or the business environment. ... Risk appetite as a concept is not new, but what is trending is linking it to resilience programs so that organizations take the right amount of risk to meet business objectives while ensuring sustainability, employee health and safety and stakeholder well-being.
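Because tolerances are "defined losses" expressed in numbers, they lend themselves to a mechanical check. A minimal sketch, with entirely illustrative categories and limits:

```python
# Illustrative tolerance limits: the maximum loss, per category, that the
# organization has agreed it is willing to incur for an objective.
TOLERANCES = {"fiscal": 1_000_000, "reputational": 250_000}

def within_tolerance(category, loss):
    """True if an incurred loss stays inside the agreed tolerance limit."""
    return loss <= TOLERANCES[category]

breach = not within_tolerance("fiscal", 1_500_000)  # exceeds the fiscal limit
```

Keeping limits in one agreed place like this is also what makes the reconciliation step in the excerpt practical: when strategy or the business environment changes, it is the numbers in that table that get revisited.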



Quote for the day:

"The secret of success in life is for a man to be ready for his opportunity when it comes." -- Benjamin Disraeli

Daily Tech Digest - October 07, 2023

No Need to Have a 'FOBO' for AI

It is a well-known fact that before AI takes your job, someone using AI will take it. To stay relevant in the job market, it is then absolutely essential to adopt AI and automation tools to enhance one's productivity to ensure that his or her job is not rendered obsolete. Here are some strategies, which will help one stay ahead of the curve and be able to effectively compete and thrive in the fast-paced and dynamic world of employment. ... Being Human: Human beings have evolved over centuries of evolution to become a superior race and embracing human emotions like empathy, gratitude, compassion, zeal to strive for the betterment of our fellow human beings will always keep us ahead of the game. This is what distinguishes us from machines. Interdisciplinary skills: Consider developing skills across multiple disciplines; combining them will make one more versatile and valuable to employers. Problem Solving: It cannot be overstated that problem solving and our ability to think critically to solve the complex problems around us will make us stay ahead of the machines. 


Driving Digital Transformation Through Model-Based Systems Engineering

Digital engineering is revolutionizing important areas such as the health care industry. From sophisticated imaging devices and robotic surgical systems to telemedicine platforms that connect doctors and patients across vast distances, each of these systems depends on the integration of numerous complex components, and each must operate seamlessly to ensure optimal performance. A key approach that relates systems engineering to digital transformation and digital engineering is model-based systems engineering (MBSE). Whereas traditional systems engineering relies on document-based approaches to support systems engineering activities (e.g., text-based requirements and design documents), MBSE does so by relying on digital system models instead. In essence, MBSE supports traditional systems engineering. It doesn’t replace it; rather, it offers an approach that aims to make systems engineering more efficient. 


Optimize Your Observability Spending in 5 Steps

You can’t use an observability agent on its own to put these steps into practice. Agents are simply neutral forwarders, sending out information to be processed downstream in the observability analysis tools. You could implement some of these steps using open source tools and in-house development, but this comes with increased operational cost and complexity, requiring your team to build expertise that is not core to your business. Overall, the main challenge with putting these steps into practice is that the available tools are either like agents, which simply send information, or like observability tools, which simply receive it. You need to be able to process telemetry data in stream, to be able to transform and route it as it passes from agent to tool, to optimize and shape it for your downstream requirements. Our Mezmo Telemetry Pipelines were conceived with the goal of helping organizations get better control of their data in stream. This approach enables you to control the flow between your data sources and your observability tools, and manage in detail the optimization of your data before it arrives downstream.
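The transform-and-route step described above can be sketched generically. This is not the Mezmo product API, just an illustrative in-stream processor: each event is shrunk and tagged as it passes through, then routed to a sink based on severity before reaching any downstream tool:

```python
def transform(event):
    """Shape an event in stream: drop bulky fields, tag errors."""
    event = {k: v for k, v in event.items() if k != "debug_payload"}
    event["is_error"] = event.get("level") == "error"
    return event

def route(event):
    """Pick a downstream destination based on the transformed event."""
    return "alerting" if event["is_error"] else "archive"

sinks = {"alerting": [], "archive": []}
stream = [
    {"level": "info", "msg": "ok", "debug_payload": "..."},
    {"level": "error", "msg": "boom"},
]
for ev in stream:
    ev = transform(ev)
    sinks[route(ev)].append(ev)
```

The point of doing this between agent and tool, rather than in the tool, is that only the shaped, reduced data is ever ingested (and billed) downstream.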


Why AI Regulations Are Needed to Check Risk and Misuse

Adopting a new technology poses certain risks, especially if it has not been previously deployed. That calls for certain risk mitigation strategies, such as testing, sandboxing, proof of concepts, and taking smaller steps such as minimum viable product, before complete adoption. Mahadevan believes there will always be risks and that we "amplify the risk" to a large extent today. "Companies need to follow a framework and put together a risk mitigation panel, rather than focus on the risk itself. I insist that AI and the risk mitigation should become a part of the blueprint. And this is not a job for a CIO alone, it is a job for a CHRO, the risk manager, and for operations," Mahadevan said. Deep fake and the violation of one's privacy is a hotly debated topic in the industry today. Thomas said deep fake will lead to many scams, causing victims to lose a lot of money. It is also a violation of one's privacy, and poses a substantial risk at an individual level. Deep fake technology uses a form of artificial intelligence called deep learning to create convincing videos, photo or audio clips of a subject, which are used for misinformation campaigns or to defraud/deceive relatives or friends.


New kind of quantum computer made using high-resolution microscope

It is unlikely to compete any time soon with the leading approaches to quantum computing, including those adopted by Google and IBM, as well as by many start-up companies. But the tactic could be used to study quantum properties in a variety of other chemical elements or even molecules, say the researchers who developed it. At some level, everything in nature is quantum and can, in principle, perform quantum computations. The hard part is to isolate quantum states called qubits — the quantum equivalent of the memory bits in a classical computer — from environmental disturbances, and to control them finely enough for such calculations to be achieved. Andreas Heinrich at the Institute for Basic Science in Seoul and his collaborators worked with nature’s ‘original’ qubit — the spin of the electron. Electrons act like tiny compass needles, and measuring the direction of their spin can yield only two possible values, ‘up’ or ‘down’, which correspond to the ‘0’ and ‘1’ of a classical bit. 


Net-zero carbon data centers: Expanding capacity amid evolving policy and regulation

The sting in the tail for data center developers is that emissions associated with the IT process load are now to be included in the calculation. Given that the annual energy consumption of even a modestly sized facility could run to hundreds of thousands of megawatt hours (MWh), this represents a very substantial cost for developers – unless they can drive their on-site emissions down below the 35 percent threshold. Outside of London, there is currently no policy for carbon offsetting, but it seems likely that other local authorities will follow London’s lead and introduce similar schemes in the future. In some regions, particularly the Nordics, planning policy has been introduced requiring new data centers to provide waste heat to local district heating infrastructure, or to be ‘heat network ready’ for connection to future schemes. Whilst a policy of promoting heat reuse may not lead to a direct reduction in data center emissions, it is seen as an important step towards decarbonizing the wider community, by displacing other, more carbon intensive, sources of heat.


6 Key Personality Traits for Disruptive Innovation Leaders

“Disruptive innovators require a mindset focused on leapfrogging – creating or doing something radically new or different that produces a significant leap forward,” said Hightech Partners. “Disruptive leaders ensure that everything they do adds value to the market.” ... For companies, it is important that leaders understand how to continually push the limits of their teams, organizations, and partners. Some believe that disruptive leaders should also push boundaries. “Leaders who travel a lot, surrounding themselves with diverse people and entrepreneurs, are able to continually expand their mindset and creative problem solving abilities,” said the report. ... Disruptive leaders manage incredible levels of uncertainty. “Adaptive planning is an approach where actions lead to results and leaders take the opportunity to reflect on and learn from these actions and results,” said Hightech Partners. “Then, they can modify their assumptions and approaches accordingly.” ... The word “normal” doesn’t exist in a disruptive leader’s vocabulary, says the report. “Once something has become normal, it’s probably obsolete,” said Hightech Partners. 


Enterprise architecture creating sustainable business value

“If you imagine a company with a C-suite in the penthouse and the IT department maybe in the basement, and then the business department somewhere in between, enterprise architects are able to ride the elevator and they have the capability to exit the elevator on every floor. And they are also able to move around on that floor in a very free manner. “They do have their own office somewhere. Mostly it's on the floor where the IT department is, but they're barely in their office because they're constantly sitting in other people's offices to communicate, collaborate, bring together and enable people – riding the elevator up and down. ... “Business fluency and an understanding of how a business works, as well as the ability to have a holistic perspective on a complex problem, is crucial. It is important to not only look at one aspect, but also consider how that aspect might influence another aspect. That is also something that enterprise architects are trained for like nobody else. Therefore, I believe that the success of holistic sustainability will be a discipline of enterprise architecture.”


Achieving Scalable, Agile, and Comprehensive Data Management and Governance

“Data governance in general is fairly uneven,” he explained. “In terms of protecting sensitive data, there’s been improvement, though. Organizations have been more willing to shut down risky programs that may expose sensitive data even at the expense of losing competitive advantage rather than run afoul of regulations.” As a sign of this improvement, he added, 73% of survey respondents said they were at least somewhat successful at meeting their regulatory and compliance objectives. Another key concern Stodder discussed was the highly distributed nature of today’s data environment. “Creating data silos goes hand in hand with data democratization,” he said. “Forty-one percent of our survey respondents said managing data silos was one of their top three challenges.” To address this, he said, many are turning to solutions such as data virtualization, data fabrics, or data meshes. He added that the research showed roughly 30% are already using data virtualization and about the same number planning to.


Global Cyberespionage Operations Surging, Microsoft Warns

Microsoft reports that when it comes to cyber operations and intelligence gathering, nominal allies target each other. Despite last month's meeting between Russian President Vladimir Putin and North Korean hereditary dictator Kim Jong Un, Pyongyang continues to run Moscow-focused espionage operations, especially focused on "nuclear energy, defense and government policy intelligence collection." Alongside the risk posed by nation-state groups, the threat posed by criminals also continues to intensify. "Ransomware-as-a-service and phishing-as-a-service are key threats to businesses, and cybercriminals have conducted business email compromise and other cybercrimes, largely undeterred by the increasing commitment of global law enforcement resources," Burt said. Microsoft said that from September 2022 through July, it saw the number of human-operated or "hands on keyboard" ransomware attacks double compared to less sophisticated, fully automated attacks. Since last November, it said, it saw the number of security incidents that appeared to lead to data exfiltration double.



Quote for the day:

"Success is a state of mind. If you want success, start thinking of yourself as a success." -- Joyce Brothers

Daily Tech Digest - October 06, 2023

Cloud infrastructure spending is growing

Although I love to be right about the strong cloud spending, that does not mean it’s suitable for all enterprises. Indeed, the trend will be to overspend, even after net-new finops deployments that closely monitor where the dollars are spent. We must focus on accountability, automation, and discipline around allocating and paying for cloud resources. I suspect many cloud deployments are hugely underoptimized and need a tune-up. Even though some of this shared infrastructure spending is unavoidable, CIOs need to review how the spending occurs and look for opportunities to save dollars without reducing the value generated by these systems. I suggest companies consider all other options, such as bringing some processing into enterprise data centers. Those prices have been falling while they have been stable or rising on the public cloud side. Also, many systems function in isolation and don’t benefit much from existing within a public cloud. Simple storage is one example, and many enterprises are putting those systems on-premises these days.


BAs are responsible for creating new models that support business decisions by working closely with finance and IT teams to establish initiatives and strategies aimed at improving revenue and/or optimizing costs. Business analysts need a “strong understanding of regulatory and reporting requirements as well as plenty of experience in forecasting, budgeting, and financial analysis combined with understanding of key performance indicators,” according to Robert Half Technology. ... Business analysts need to know how to pull, analyze and report data trends, share that information with others, and apply it to business goals and needs. Not all business analysts need a background in IT if they have a general understanding of how systems, products, and tools work. Alternatively, some business analysts have a strong IT background and less experience in business but are interested in shifting away from IT into this hybrid role. The role often acts as a communicator between the business and IT sides of the organization, so having extensive experience in either area can be beneficial for business analysts.


AI Needs Data More Than Data Needs AI

While data plays a foundational role in AI, the reverse is not true. Data doesn't inherently need AI to exist or be valuable. Data, in various forms, has been collected and analyzed for centuries without the need for sophisticated AI algorithms. Data on its own can provide valuable insights and inform decision-making processes. Therefore, organizations should not blindly chase the AI hype at the cost of ignoring the importance of data management and data quality. The role of AI is to take the computation and insights of good quality data to the next level and not necessarily attempt to fix the decades-old data management processes. ... While AI relies heavily on data for its operation and evolution, data can benefit from AI in several ways. Data Management: AI can help automate data management tasks, making it easier to process, clean and organize large datasets. Predictive Insights: AI can uncover patterns and insights in data that may not be immediately apparent to humans, enhancing the value of the data.


Enterprises see AI as a worthwhile investment

Despite prior industry research indicating that 90% of AI initiatives fail to produce substantial ROI and roughly half never leave the prototype stage, the overwhelming majority of respondents to this survey (92%) find business value from their models in production and 66% feel their models have delivered results that are outstanding or exceed expectations. Common use cases for AI among these leading-edge organizations include personalizing the customer experience, fraud detection, optimizing sales and marketing and improving real-time decision making. The success of this group offers a basic roadmap that other organizations should consider when developing their own best practices, including: Approach: A majority of responding organizations have a robust, defined approach and a dedicated team for monitoring ML models in production. In fact, among larger enterprises, 71% have at least 100 people working in ML while over half have more than 250.


5 Strategies for Cloud Security in Health Care

Adopting data security in the cloud doesn’t mean merely uploading patient data to S3 and enabling encryption. There are many security controls that need to be in place before a single patient record is migrated. For instance, there is particular concern about data security on medical devices and wireless body area networks (devices that are embedded in a patient’s body). Obviously, it’s vital to secure such devices from exploits. When running services on the cloud, you should review all relevant data privacy considerations and encryption controls, including data encryption, public-key encryption, identity-based encryption, identity-based broadcast encryption and attribute-based encryption. Then adopt a framework for achieving secure and controlled identity access using federation (like OpenID Connect, which is not the same as OpenID, or SAML). Finally, you should ensure that monitoring and audit controls are in place to maintain confidentiality. You should also have an incident response plan in place to handle crisis scenarios in the event of an incident. 
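The monitoring and audit controls mentioned above can start very small. The sketch below is an illustrative helper (names and record shape are hypothetical, not from the article) that emits one structured audit record per access to patient data, using only Python's standard logging module:

```python
import datetime
import json
import logging

audit = logging.getLogger("phi.audit")

def log_phi_access(actor: str, patient_id: str, action: str, granted: bool) -> dict:
    """Emit one structured audit record per access to patient data.

    A stream of records like this is what later lets monitoring flag unusual
    access patterns and gives an incident-response team a trail to follow.
    """
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "patient_id": patient_id,
        "action": action,
        "granted": granted,
    }
    audit.info(json.dumps(record))  # in production, ship to an append-only store
    return record
```

In a real deployment the records would flow to tamper-evident storage and feed the incident-response plan; the important part here is capturing who touched which record, when, and whether access was granted.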


Financial Institutions Turn to AI and Cloud to Solve Data Challenges

In data management, the potential uses of GenAI, powered by large language models, have been recognised by many financial institutions, including State Street. For instance, it can help in the cross-mapping of datasets, the classifying of data and more generalist applications such as summarising reports and responding to plain English inquiries. ... The Alpha platform uses GenAI with Snowflake as a strategic partner providing the data foundation of the platform. Snowflake’s cloud-native architecture streamlines data sharing and governance, enables faster time to market for data-centric applications, and offers a rich environment of AI and machine learning-based capabilities for data scientists, quants and engineers. “Every few years, the technology landscape re-sets, creating a small window of opportunity that in turn enables a giant leap in innovation; GenAI is the opportunity that will define the new set of industry leaders over the next decade,” State Street Executive Vice President and Chief Architect Aman Thind tells A-Team Group.


Building data center networks for GenAI fabric enablement

Building GenAI data centers from a network perspective differs greatly from traditional data center buildouts -- or even those that were designed to support high-performance computing (HPC). ... After all, the pace of a GenAI application is only as fast as its slowest component. If properly built, the network can be eliminated as a potential performance bottleneck. Building a highly scalable network is also key to GenAI data centers as it enables future growth capacity. Network switch fabrics must include hardware that can expand horizontally and vertically, as well as use network OSes on switching hardware that include advanced features, such as packet spraying, load awareness and intelligent traffic redirection. These features provide automated rerouting of traffic within the network and between GPU processing units that may become overloaded. ... Early GenAI adopters have concluded that the use of multisite or micro data centers is the best option to accommodate this level of density. And, yet again, this puts pressure on the network interconnecting these sites to be as high-performing and resilient as possible.


Breach Roundup: Still Too Much ICS Exposed on the Internet

Apple responded to an actively exploited zero-day flaw in iOS and iPadOS on Wednesday with the release of security patches. The identified vulnerability, tracked as CVE-2023-42824, exists in the kernel and may allow an attacker to elevate privileges. "Apple is aware of a report that this issue may have been actively exploited against versions of iOS before iOS 16.6," the company said. The update also addresses CVE-2023-5217, a WebRTC component issue. WebRTC is an open-source project that supports real-time computing between browsers and mobile applications, powering uses such as video and voice calling. ... Sony Interactive Entertainment alerted around 6,800 individuals about a cybersecurity breach. The intrusion resulted from an unauthorized party exploiting a zero-day vulnerability, tracked as CVE-2023-34362, in the MOVEit file transfer platform. This critical-severity SQL injection flaw, leading to remote code execution, was used by the Clop ransomware gang in widespread attacks in late May. 


8 Ways to Combat Ageism in Your Job Search

Workplace experts say candidates can combat this by showing what efforts they've made to quickly pick up new skills and show enthusiasm for future learning. That might mean enrolling in extra training courses, getting new certifications and highlighting them in your résumé or interview, North said. Younger workers may need to show that they have taken proactive measures to learn new job skills they may lack. Older workers may want to show that they can keep up with fast-paced environments and various tech tools. ... "If you don't have to input this information, don't volunteer it," he said, adding that phrases like 40-plus years of experience also may not be best. Instead, stick to your skills and experiences. If you lack experience in one area, show how your skills are transferrable for this specific job. You can also be clear about any kind of transition, like a career change, or gap in employment by placing it in an executive summary section at the top of your résumé, Freeman said. Quantify your previous work's impact with numbers or qualify it by explaining how it affected the results.


Ransomware Crisis, Recession Fears Leave CISOs in Tough Spot

With a new ransomware target being attacked every 14 seconds, organizations must prioritize ransomware prevention. As ransomware grows more sophisticated, mitigating it becomes increasingly challenging. There's no silver bullet to eradicate attacks, and having to operate in a tight market adds a layer of complexity. CISOs and security leaders must focus on the best return on investment while building out a multilayered approach for improving their overall IT security. One strategy to accomplish this is managing attack vectors that use encrypted channels, with preventive technologies that can stop adversaries before they have a chance to compromise networks or while they are executing their multistep campaigns. ... Ransomware gangs also take advantage of legitimate websites that use SSL/TLS to look secure but have been infected with drive-by downloads. And cybercriminals latch onto browser vulnerabilities that can lead to infection when the entry point is encrypted, allowing encrypted threats embedded with malicious payloads to go unnoticed.



Quote for the day:

“People are not lazy. They simply have important goals – that is, goals that do not inspire them.” -- Tony Robbins

Daily Tech Digest - October 05, 2023

AI and Overcoming User Resistance

If users are concerned, and even worried about AI, it could lead to user resistance, which is a dynamic that IT pros are familiar with from their history of implementing new systems that alter business processes, require employee retraining, and may even change employee jobs. So, are process change and user resistance any different when you introduce AI? I would argue yes. You’re not just retraining an employee on a new set of steps for processing an invoice or taking an order. You’re actually introducing an automated thinking process into what an employee has been doing. Now, technology is going to make or recommend decisions that the employee used to make. This can lead to employees experiencing a loss of empowerment and control. ... This is exactly the “sweet spot” that companies (and IT) should aim for with AI projects: an environment where everyone sees beneficial value from AI, and where no one feels disenfranchised. This is an achievable environment if users are engaged early in business process redefinition and in how AI will work. 


Eyes everywhere: How to safely navigate the IoT video revolution

Users are rightfully wary of bringing even more cameras into their homes and offices. The good news is that they, too, can protect their camera-enabled devices with some simple steps. First, customize. This includes changing default usernames and passwords, updating the device’s firmware and software, and staying informed about the latest security threats. This is a simple yet effective way to create a barrier between yourself and would-be hackers. Next, take it to the edge. Processing and storing data at the edge instead of in the cloud is another surefire way to protect your endpoints. After all, by storing the information under your own lock and key, you can be sure about who can access it and how. Users also benefit from reduced latency by storing the information closer to home, which is particularly important with heavy video feeds. Finally, buy trusted brands. Attack surfaces are only as strong as their weakest link. So, choose companies that have a proven track record when it comes to privacy and security.


Why HTTP Caching Matters for APIs

In some caching strategies, especially for dynamic resources, the cache can store not only the complete response but also the individual elements or changes that make up the response. This approach is known as “delta caching” or “incremental caching.” Instead of sending the complete response, delta caching sends only the changes or updates made to the cached version of the resource. ... Delta caching is particularly useful for scenarios where resources change frequently, but the changes are relatively small compared to the complete resource. For example, in a collaborative document editing application, delta caching can be employed to send only the changes made by a user to a shared document, instead of sending the entire document every time it is updated. ... Caching enhances application resilience by reducing the risk of service disruptions during periods of high demand. By serving cached responses, even if the backend servers experience temporary performance issues, the application can continue to respond to a significant portion of requests from the cache. The caching layer acts as a buffer between the backend servers and the clients.
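The delta-caching idea above can be sketched in a few lines of Python using the standard library's difflib (function names are hypothetical): the server describes the current resource as edits against the client's cached copy, and the client rebuilds the resource from its cache plus that delta.

```python
import difflib

def compute_delta(cached: str, current: str) -> list:
    """Server side: describe `current` as edits against the client's cached copy."""
    # autojunk=False keeps frequent characters matchable in long, repetitive text
    matcher = difflib.SequenceMatcher(None, cached, current, autojunk=False)
    delta = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            delta.append(("copy", i1, i2))          # client already has these bytes
        else:
            delta.append(("data", current[j1:j2]))  # ship only the new bytes
    return delta

def apply_delta(cached: str, delta: list) -> str:
    """Client side: rebuild the current resource from the cache plus the delta."""
    parts = []
    for op in delta:
        if op[0] == "copy":
            parts.append(cached[op[1]:op[2]])
        else:
            parts.append(op[1])
    return "".join(parts)
```

For a small edit to a large document, the bytes shipped in "data" entries are a tiny fraction of the full resource, which is exactly the payoff the collaborative-editing example describes.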


Author Talks: How to speak confidently when you’re put on the spot

People become nervous for many reasons. More than 75 percent of people report being nervous in high-stakes communication, be it planned or spontaneous. Past experience could be a factor, as well as high stakes and the importance of the goals you’re trying to achieve. Those of us who study this at an academic level believe that the nervousness is wired into being human. We see this across all cultures. We see it develop typically in the early teen years and progress from there. There’s an evolutionary component to it. One of the most helpful tips is normalizing the anxiety that you feel. You’re not alone. ... My anxiety management plan has three steps. The first thing I do is hold something cold in the palms of my hand before I speak. That cools me down. Secondly, I say tongue twisters to warm up my voice and also to get myself in the moment. Third, I remind myself, “I am in service of my audience. I am here to help them.” That really gets me other-focused rather than self-focused. That’s my anxiety management plan. I encourage everybody to find a plan that works for them.


Dell customizes GenAI and focuses on data lakehouse

Being able to fine-tune as well as train generative AI is a process that relies on data, lots and lots of data. For enterprise use cases, that data isn’t just generic data taken from a public source, but rather is data that an organization already has in its data centers or cloud deployments and is likely also spread across multiple locations. To help enable enterprises to fully benefit from data for generative AI, Dell is building out an open data lakehouse platform. The data lakehouse concept is one that was originally pioneered by Databricks, as a way of enabling organizations to more easily query data stored in cloud object storage-based data lakes. The Dell approach is a bit more nuanced in that it is taking a hybrid approach to data, with a goal of being able to query data across on-premises as well as multi-cloud deployments. Greg Findlen, senior VP of data management at Dell, explained during the press briefing that the open data lakehouse will be able to use Dell storage and compute capabilities as well as multi-cloud storage.


Don’t try running with data before you can walk

In South Africa, data governance tends to be a grudge investment based on regulatory issues. However, organisations that don’t do the basics well, and don’t have mature data governance and established frameworks in place, may well find they are spending on analytics technologies that don’t live up to expectations. What stands in the way of getting governance right? Firstly, it’s not easy. It involves all stakeholders across all domains. It may require a mindset change, and users may need to learn to use new technology. Secondly, it can be expensive, and it may take time before the organisation sees the value of it. One of the biggest problems is that the value of data governance investments is difficult to quantify in monetary terms. ... Data products should be supported by the entire CDO capability – including the CDO, data owners and data stewards – as well as IT, to ensure the data products will add the required business value. Owners and stewards need to identify and curate the required data for the products, while also ensuring good quality data and metadata management to make it more usable for broader business.


Yes, Software Development is an Assembly Line, but not Like That

Manufacturing engineers produce assembly lines and manufacturing processes that can produce those units of value. Software engineers are largely the same, also producing systems and processes that deliver units of value. The manufactured widget of software is actually the discrete user interactions with those features and pieces of software, not the features themselves. The assembly line in software engineering isn’t, as many think, the engineers producing features. ... Systems like Total Quality Management, which are focused on driving a cultural mindset of continuous improvement and an entire company focused on providing very low defect rates, easily translate to customer satisfaction in software organizations. Just to pick on TQM a bit, if we were to adapt it to software, we would focus on the number of times users are impacted by a defect more than the number of open bugs. Instead of tracking the number of defects and searching for more, we would be tracking the number of users who either failed to receive the promised value from the product or had severely diminished value.
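The metric shift described above is easy to make concrete. Assuming a hypothetical event log of (user, defect) pairs recorded whenever a user actually hits a bug, the TQM-style view counts impacted users rather than open defects:

```python
from collections import Counter

# Hypothetical defect-impact log: one (user_id, defect_id) pair per time a
# user hit a bug in production.
events = [
    ("u1", "BUG-7"),
    ("u2", "BUG-7"),
    ("u3", "BUG-7"),
    ("u1", "BUG-9"),
]

open_defects = len({defect for _, defect in events})        # classic view: count bugs
impacted_users = len({user for user, _ in events})          # TQM-style view: count users
impact_by_defect = Counter(defect for _, defect in events)  # where to aim fixes first
```

A defect tracker would show two open bugs here, while the user-impact view shows that BUG-7 alone degraded the experience of three users, which is the number this adaptation of TQM says to drive down.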


Cloud Services Without Servers: What’s Behind It

“The basic idea of serverless computing has been around since the beginning of cloud computing. However, it has not become widely accepted,” explains Samuel Kounev, who heads the JMU Chair of Computer Science II (Software Engineering). But a shift can currently be observed in industry and in science: the focus is increasingly moving towards serverless computing. A recent article in Communications of the ACM, the magazine of the Association for Computing Machinery (ACM), deals with the history, status and potential of serverless computing. Among the authors are Samuel Kounev and Dr. Nikolas Herbst, who heads the JMU research group “Data Analytics Clouds”. ... “NoOps” is the first principle, which stands for “no operations”. This means, as described above, that technical server management, including the hardware and software layers, is entirely the responsibility of the cloud provider. The second principle is “utilisation-based billing”, which means that only the time during which the customer actively uses the allocated computing resources is billed.
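Utilisation-based billing is simple to model. The sketch below uses hypothetical rates loosely shaped like typical function-as-a-service pricing (real providers publish their own rates and rounding rules); the key property is that idle time costs nothing:

```python
import math

# Hypothetical rates for illustration only; real FaaS pricing varies by provider.
PRICE_PER_GB_SECOND = 0.0000166667
PRICE_PER_REQUEST = 0.0000002

def invocation_cost(duration_ms: float, memory_mb: int) -> float:
    """Cost of one invocation: bill only the time the code actually ran."""
    billed_ms = math.ceil(duration_ms)  # round up to the next millisecond
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND + PRICE_PER_REQUEST

def monthly_bill(invocations: list) -> float:
    """Total for (duration_ms, memory_mb) invocations; no invocations, no bill."""
    return sum(invocation_cost(d, m) for d, m in invocations)
```

Contrast this with a provisioned server, which accrues cost around the clock whether or not any request arrives; here `monthly_bill([])` is exactly zero.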


7 sins of software development

Some software development issues can be fixed later. Building an application that scales efficiently to handle millions or billions of events isn’t one of them. Creating effective code with no bottlenecks that surprise everyone when the app finally runs at full scale requires plenty of forethought and high-level leadership. It’s not something that can be fixed later with a bit of targeted coding and virtual duct tape. The algorithms and data structures need to be planned from the beginning. That means the architects and the management layer need to think carefully about the data that will be stored and processed for each user. When a million or a billion users show up, which layer does the flood of information overwhelm? How can we plan ahead for those moments? Sometimes this architectural forethought means killing some great ideas. Sometimes the management layer needs to weigh the benefits against the costs of delivering a feature at scale. Some data analysis just doesn’t work well at large scale. Some formulas grow exponentially with more users.
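The point about planning algorithms and data structures up front shows up in something as small as a membership check. This illustrative micro-benchmark (not from the article) times the same lookup against a list and a set; the code looks identical at the call site, but one scans every element while the other hashes:

```python
import timeit

n = 100_000
ids_list = list(range(n))
ids_set = set(ids_list)
needle = n - 1  # worst case for the list: it must scan everything

# Same `in` operator, very different behavior at scale:
# list membership is O(n) per lookup, set membership is O(1) on average.
list_time = timeit.timeit(lambda: needle in ids_list, number=100)
set_time = timeit.timeit(lambda: needle in ids_set, number=100)
```

At a million or a billion users the gap stops being a micro-benchmark curiosity and becomes the difference between a feature that works in a demo and one that survives production load, which is why this choice belongs in the initial design, not in a later patch.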


Organizations grapple with detection and response despite rising security budgets

For better understanding and evaluation, the study was able to categorize the responding organizations into "secure creators" and "prone enterprises." The grouping was done on the basis of the number of solutions used, the adoption of emerging technologies, and the use of technologies to simplify their automation environments. The study found that secure creators are more satisfied with their approach to cybersecurity, experience fewer cybersecurity incidents, and can detect and respond to incidents quicker. About 70% of them are early adopters of emerging technologies. The secure creators are also more focused on extracting the most value from specific advanced solutions, with 62% already using or in the late stages of implementing AI/ML solutions, as compared to only 45% of the prone enterprises. "When it comes to technology, the more clutter an organization has in its armory, the harder it is to pick up signals and get on top of issues quickly," Watson said.



Quote for the day:

"You’ll never achieve real success unless you like what you’re doing." -- Dale Carnegie

Daily Tech Digest - October 04, 2023

The Big Threat to AI: Looming Disruptions

As if semiconductor supply chain issues weren’t enough of a problem for AI production, other supply chains are piling on the challenges. "AI is software and open-source code makes up 90% of most codebases, which means the open source software supply chain has just as much, if not more, impact on AI production than regulated hardware components,” says Feross Aboukhadijeh, founder and CEO of Socket. The impact is potentially widespread given there are many open source AI models and tools on the market today and more are coming. ... There are numerous efforts afoot to relieve these concerns and secure a prime slice of the AI market pie. For what corporation does not envy Nvidia right now? “Many countries are trying to increase their piece of the global supply chain capacity and/or to onshore as much as possible through subsidies and other incentives. This has spurred significant investment and activity, but it remains to be seen whether these investments will address the supply chain problems in a timely or appropriate manner,” says Almassy.


When to Scale and When Not to Scale

Scaling is a nuanced decision in the agile journey, bridging the demands of complexity and rapid market needs. While the lure of scaling promises greater coordination, efficient handling of product intricacies, and swifter market responses, it's pivotal to approach it judiciously. It's not just about expanding teams or implementing frameworks; it's about recognizing when the product's complexity or market dynamics truly warrant a scaled approach. On the flip side, scaling without a clear strategy can introduce unforeseen challenges. From the inadvertent hiring of too many junior roles to the formation of functional silos, scaling can sometimes complicate rather than streamline. Additionally, foundational elements, such as a firm grasp of agile practices and automation, can determine the success of scaling endeavors. In essence, scaling is a tool in the agile toolkit—powerful when used correctly but potentially counterproductive if misapplied. Organizations must reflect on their unique scenarios, understanding both the promises and pitfalls of scaling, to ensure they chart a path that genuinely enhances agility, efficiency, and value delivery.


From Big Data to Better Data: Ensuring Data Quality with Verity

High-quality data is necessary for the success of every data-driven company. It enables everything from reliable business logic to insightful decision-making and robust machine learning modeling. It is now the norm for tech companies to have a well-developed data platform. This makes it easy for engineers to generate, transform, store, and analyze data at the petabyte scale. As such, we have reached a point where the quantity of data is no longer a boundary. Yet this has come at the cost of quality. ... Poor data quality in Hive caused tainted experimentation metrics, inaccurate machine learning features, and flawed executive dashboards. These incidents were hard to troubleshoot, as we had no unified approach to assessing data quality and no centralized repository for results. This delay increased the difficulty and cost of data backfills. The lack of centralization in data quality also made the data discovery process inefficient, making it hard for data scientists and data engineers to identify trustworthy data.
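Verity itself isn't shown here, but the kind of check such a system centralizes can be sketched in a few lines (helper names are hypothetical): for example, a null-rate gate that flags a dataset before it taints downstream metrics, features, and dashboards.

```python
def null_rate(rows: list, column: str) -> float:
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for row in rows if row.get(column) is None)
    return missing / len(rows)

def check_column(rows: list, column: str, max_null_rate: float = 0.01) -> dict:
    """A minimal data-quality check: report, and let callers block, datasets
    whose null rate exceeds the agreed threshold."""
    rate = null_rate(rows, column)
    return {"column": column, "null_rate": rate, "passed": rate <= max_null_rate}
```

The value of centralizing checks like this is less the logic, which is trivial, than the shared definitions and the single repository of results, which is exactly the gap the incidents above exposed.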


AI vs software outsourcing: An opportunity or a threat?

As AI becomes more widespread, the question is whether programmers write code themselves or have chatbots write it. Customers usually expect quality. If AI can help deliver this quality faster, why not? Look, everyone knows that there is a programming language called Java. There are Apache Commons libraries. You can Google it, but can you do something with it? Can you bring value to the business? This is the point. LLM models are a tool, just like a library or a framework. However, they have other capabilities that need to be mastered and used to bring value. It will be a long time before AI can replace developers because there will always be something that needs to be fixed. Either it's an error in the code or something wrong with the configuration. For example, a bot may have written code that seems to work, but then an error appears. The developer spends little time writing the code but later spends more time looking for the error. Let's take GitHub Copilot. Programmers note that the acceptance rate of suggestions from Copilot is up to 40%.


Why all IT talent should be irreplaceable

“Great employee” is easy to type. It’s less easy to define. Here’s a short list to get you started. Scrub it by discussing the question with your leadership team. The habit of success: Some employees seemingly don’t know how to fail. Give them an assignment and they’ll figure out a way to get it done. Competence: As a general rule, it’s better to apologize for an employee’s bad manners than for their inability to do the work. Without competence, employees with a strong success habit can do a lot of damage by, for example, creating kludges instead of sustainable solutions. Followership: Leadership is a prized attribute for employees to have. Prized, that is, if they’re leading in their leader’s direction. Otherwise, if you and they are leading in different directions, all your prized leaders will do is generate conflict and confusion. Followership is what happens when they embrace the direction you’re setting and make it their own. Intellectual honesty: Some employees can be persuaded with evidence and logic. Others trust their guts instead. That’s a physiological error. You want people who digest with their intestines but think with their brains.


Do you need both cloud architects and cloud engineers?

We need a collaborative approach with both disciplines. One cannot function properly without the other. For example, I cannot design multicloud-based systems that define different usages for different cloud services on different clouds without sound engineering to realize them. ... Many assume that the engineering tasks are the easiest part of the journey to the cloud. After all, if the cloud architect is good, the configuration should work, and it’s just a matter of using sound AI tools to carry out deployment. Even worse, some companies are working just with engineers and hiring specific skills. The company may pick a cloud brand and hire security, application, data, and AI engineers for that cloud platform. They assume that this specific cloud platform is the correct and optimized platform, an assumption that will usually cause trouble. Oh, the solutions may work, but it could cost 10 times more to operate. Not surprisingly, these companies have an underoptimized architecture since they’ve given zero consideration to architecture or the use of cloud architects. AI won’t save you from needing a good architecture and a good set of engineering disciplines.


What IT needs to know about energy-efficiency directives for data centers

New regulations springing up in various regions will be among the drivers of data center sustainability in the months ahead. There are two main groups of regulations emerging that affect data center operations, according to Jay Dietrich, research director of sustainability at Uptime Institute. One is financial reporting modeled on the Task Force on Climate-related Financial Disclosures (TCFD), which requires reporting on energy consumption and efficiency and associated greenhouse gas (GHG) emissions. The other is the European Energy Efficiency Directive (EED), which requires an energy management plan, an energy audit, and reporting of operational data. In addition, there are voluntary, country-specific standards and siting requirements for data center efficiency and operations in various countries around the world, Dietrich says. A current example of a TCFD-related regulation is the EU Corporate Sustainability Reporting Directive (CSRD), with reporting requirements rolling out from large to small enterprises beginning in 2025 and continuing until 2028.


What does leadership in a hybrid world look like?

Firms want their best people to stick around and give more of themselves. Studies have shown that improved employee collaboration and alignment with a common purpose is key to achieving that. But what is the best way to make that happen in the way we now wish to work and live our lives? Some suggest that the emergence of generative AI and new work tools can improve productivity regardless of the workplace setting. But perhaps a different, more human, approach is needed? The profound loosening of the relationships that employees have with their firm and one another requires a similarly fundamental reimagining of the role of the leader itself. Ultimately, this will not come through new technology, systems, processes, or HR policy (however well-crafted), but through the actions and behaviours of credible and engaging people managers. Firms need to re-establish a sense of cohesion, and that needs people who are exceptionally good at doing just that. Businesses can’t just issue ultimatums or mandates; they need a leadership approach that “coheres” employees to feel less remote from one another and the firm.


Six skills you need to become an AI prompt engineer

Prompt engineering is much more of a collaborative conversation than an exercise in programming. Although LLMs are certainly not sentient, they often communicate in a way that's similar to how you'd communicate with a co-worker or subordinate. When you're defining your problem statements and queries, you will often have to think outside the box. The picture you have in your head may not translate to the internal representation of the AI. You'll need to be able to think about a variety of conversational approaches and different gambits to get the results you want. ... While you might not necessarily be expected to write the full application code, you will provide far more value if you can write some code, test your prompts in the context of the apps you're building, run debug code, and overall be part of the interactive programming process. It will be much easier for a team to move forward if the prompt engineering occurs as an integral part of the process, rather than having to add it in and test it as a completely separate operation.


The Cost Dynamics of Multitenancy

Isolating tenants with infrastructure has a higher initial cost, especially as you discover the right size for tenant workloads. Once you understand the cost for a tenant, it provides a very stable cost per tenant. Any unevenness in the cost profile represents a choice of timing. For example, if you use containers per tenant, you must decide when to commission your next cluster. Software-based multitenancy has an early advantage as it keeps the initial product price low. The marginal economics of onboarding a tenant are very low — almost zero. There comes a point when the initial design can no longer manage the load. The first port of call is vertical scaling — adding more power to the infrastructure to handle the load. This increases the cost per tenant but enables further tenants to be added. Eventually, you run out of vertical scaling options and look to horizontal scaling. This requires more investment as you need to handle load balancing, re-architect stateful interactions and introduce technologies such as shared cache.
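The two cost curves described above can be sketched numerically. The figures below are invented purely to show the shape of each curve: isolation gives a flat, stable cost per tenant, while software multitenancy starts near zero marginal cost and then climbs in steps as vertical-scaling upgrades are forced:

```python
def isolated_cost_per_tenant(tenants: int, unit_cost: float = 500) -> float:
    """Infrastructure-per-tenant: each tenant gets its own rightsized stack,
    so the cost per tenant is flat once workload sizing is understood."""
    return unit_cost

def shared_cost_per_tenant(tenants: int, base: float = 2000,
                           capacity: int = 50, upgrade: float = 3000) -> float:
    """Software multitenancy: near-zero marginal cost until the initial design
    hits capacity, after which each vertical-scaling upgrade raises the bill."""
    upgrades = (tenants - 1) // capacity  # upgrades needed beyond initial design
    total = base + upgrades * upgrade
    return total / tenants

for n in (10, 50, 100, 200):
    print(n, isolated_cost_per_tenant(n), round(shared_cost_per_tenant(n), 2))
```

With these illustrative numbers, the shared model is far cheaper per tenant early on (200 at 10 tenants vs. 500), but its per-tenant cost bottoms out and then creeps upward with each capacity step, which is the crossover dynamic the passage describes.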



Quote for the day:

"When you stop chasing the wrong things you give the right things a chance to catch you." -- Lolly Daskal