Daily Tech Digest - March 12, 2023

What is DevSecOps and How Does it Work?

DevSecOps is a methodology that emphasizes integrating security practices into the software development process. The idea is to promote collaboration and communication among development, security, and operations teams so that security is incorporated throughout the entire software development lifecycle. DevSecOps is a combination of three words: Development + Security + Operations. The approach acknowledges that security is an integral part of the software development cycle and should be integrated from the beginning rather than treated as an afterthought. ... Incorporate security practices as early as possible in the software development lifecycle, because the entire DevSecOps team is collectively responsible for the security of your system. By implementing security from the beginning, your team can discover and fix security threats early, enabling smooth delivery cycles. ... DevSecOps can significantly increase your chances of success by helping ensure the software you develop is free of security issues. However, getting it right is a real challenge.


The AI Act: What Does It Mean for Patenting Products?

The Act lists three categories of AI systems. The first relates to systems associated with an ‘unacceptable risk’. It includes systems which seek to manipulate vulnerable persons, social scoring and the use of real-time biometric data, such as face recognition (with limited exceptions for law enforcement). These systems are simply prohibited in the EU. The second category is ‘high risk’. There are two main parts to this: systems which are key to safety and systems which could potentially be socially damaging, such as those where bias could be particularly harmful. For instance, AI systems associated with access to opportunities in life, such as education, employment, credit scores, and public services, fall into this category. The Act is intended to ensure that everyone is treated fairly and not subjected to prejudice or discrimination baked into an AI system. The AI Act introduces additional burdens for bringing such products to market if they have an AI element.
 

Don’t Get Caught Off Guard: A Roadmap to Cyber Resilience

The terms cybersecurity and cyber resilience have been used interchangeably by many. While both share the same objective, they differ in implementation. While cybersecurity emphasizes deploying strategies that prevent cyber-attacks from penetrating systems, cyber resilience is a holistic approach that encompasses resisting, navigating, and surviving the entire lifecycle of an attack. In short, cyber resilience is broader in scope than cybersecurity. According to the World Economic Forum’s 2022 Global Cybersecurity Outlook, the average cost of a corporate breach is $3.6 million per incident, and it takes roughly 280 days to identify and address a penetration. These figures alone call for a game plan. Building defenses along the perimeter and following a siloed approach are methods of past years. Considering the massive attack surface that currently exists, business leaders must steer towards a holistic cybersecurity strategy that involves identifying and securing all vulnerable endpoints.


It’s a weird, weird quantum world

Shor’s work was the first to show that a quantum computer could solve a real, practical problem. His talk set the seminar abuzz, and the news spread, then became conflated. Four days after his initial talk, physicists across the country were assuming Shor had solved a related, though much thornier problem: prime factorization — the challenge of finding the two prime factors of a very large number. ... “It was like the children’s game of ‘telephone,’ where the rumor spread that I had figured out factoring,” Shor says. “And in the four days since [the talk], I had!” By tweaking his original problem, Shor happened to find a similar quantum solution for prime factorization. His solution, known today as Shor’s algorithm, showed how a quantum computer could factor very large numbers. Quantum computing, once regarded as little more than a thought experiment, suddenly had in Shor’s algorithm an instruction manual for a very real and potentially disruptive application. His work simultaneously ignited multiple new lines of research in quantum computing, information science, and cryptography.
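
The classical half of Shor's result is easy to illustrate. The quantum computer's only job is to find the period r of a^x mod N; once r is known, ordinary arithmetic recovers the factors. Below is a minimal Python sketch of that classical reduction, with a brute-force loop standing in for the quantum period-finding subroutine:

```python
from math import gcd

def find_period(a: int, n: int) -> int:
    """Order of a modulo n: the smallest r > 0 with a**r % n == 1.
    Shor's quantum subroutine finds this efficiently; here we
    brute-force it classically, purely for illustration."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n: int, a: int) -> tuple:
    """Classical post-processing of Shor's algorithm: given the period r
    of a mod n, derive factors of n via gcd. Works when r is even and
    a**(r//2) is not congruent to -1 mod n; otherwise retry with a new a."""
    g = gcd(a, n)
    if g != 1:                      # lucky guess: a already shares a factor
        return (g, n // g)
    r = find_period(a, n)
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        raise ValueError("unlucky base a; retry with another a")
    p = gcd(pow(a, r // 2) - 1, n)
    q = gcd(pow(a, r // 2) + 1, n)
    return (p, q)

# Factoring 15 with base 7: the period of 7 mod 15 is 4,
# gcd(7**2 - 1, 15) = 3 and gcd(7**2 + 1, 15) = 5.
print(shor_classical(15, 7))  # → (3, 5)
```

The quantum speedup lies entirely in computing the period for numbers far too large to brute-force; everything else is classical number theory.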


Why You Should Give a Damn About Software Design

The Factory Design Pattern is a programming concept that allows you to create objects in a more flexible and controlled way. Imagine you need to create many products for your store, but each one is created differently based on some conditions. For example, if you were building cars, you know that they will all require at least 4 wheels, a gas tank, an engine, and so forth, but every car will have a unique color, shape, year, and model. Instead of creating each car entirely from scratch, you can build a blueprint that determines exactly how each car should be engineered. No need to keep returning to the drawing board. The factory has a method that takes in some parameters and, based on those parameters, creates the appropriate object and returns it to you. This way, you can create many objects easily, and you can change how the objects are created by changing the factory’s method instead of changing the entire program.
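
The car analogy above can be sketched in a few lines of Python. The class and model names here are illustrative, not from any particular codebase:

```python
from dataclasses import dataclass

@dataclass
class Car:
    model: str
    color: str
    year: int
    wheels: int = 4  # the shared baseline: every car gets 4 wheels

class CarFactory:
    """Creates Car objects from a model name, so callers never construct
    cars directly and all creation logic lives in one place."""

    _defaults = {
        "sedan": {"color": "silver", "year": 2023},
        "truck": {"color": "black", "year": 2022},
    }

    def create(self, model: str, **overrides) -> Car:
        if model not in self._defaults:
            raise ValueError(f"unknown model: {model}")
        spec = {**self._defaults[model], **overrides}
        return Car(model=model, **spec)

factory = CarFactory()
car = factory.create("sedan", color="red")  # override just the color
```

Changing how cars are built now means editing only `CarFactory.create`, not every call site, which is exactly the flexibility the pattern is meant to provide.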


Good Things Happen When DevSecOps and Cloud Collide

Cloud-based data now accounts for 39% of successful cyberattacks. Containerized applications, which have been a boon to both migration and management, can also lead to vulnerabilities – so it is fitting that security is cited as a top concern by more than half of the organizations surveyed. ... The idea is simple: you must find a way, a process, a method, and the right partners to help secure all workloads across any cloud environment, regardless of the platform or the amount of data and application real estate needed. By establishing this model, organizations are able to create a fundamental layer of protection against the ever-evolving threat of cybercriminals. Take one of our large banking customers, for example, who runs critical applications on AWS with stringent security and compliance requirements. We implemented a secured framework to protect their applications running on modern, cloud-native services like containers and Lambda functions, using DevSecOps principles and cloud-native SIEM solutions.


Every third employee in IT will soon be a gig worker – are you one of them?

For enterprises, engagement with the gig workforce brings cost savings, the flexibility of an ad-hoc, project-based working model that can be scaled up or down quickly, quick onboarding, and access to highly skilled, niche talent. However, engaging with gig workers comes with its own set of challenges, including concerns around data security, IP theft, access management, cultural orientation, etc. These challenges span the planning, onboarding, execution, and payment phases in the lifecycle of gig workers. The study reveals that more than 70 per cent of CXOs feel that onboarding and execution are the two difficult yet crucial phases; addressing them can enable widespread adoption of the gig economy model. Technologies such as cloud, artificial intelligence (AI), and cybersecurity are being leveraged to address these challenges in a transparent and productive way. Cloud technology, which enabled the seamless transition to remote work, will be critical in addressing the challenges of the gig economy.


Cyber Resilience: More Than A Software Problem

From our unique position in the BIOS of millions of active devices, we can see security applications from the world’s leading security companies, running in some of the most sophisticated security environments and managed by some of the strongest cyber teams, still operating at 60 to 70 per cent resiliency — meaning they are only installed, running and healthy across 60 to 70 per cent of the devices where they are required for compliance. Another way to think about that is that $0.30 to $0.40 of every dollar spent could be wasted if those controls are not healthy and working to protect the user. That complexity is what we need to tackle, for certain. And we understand that the end result will never be zero risk; resiliency in spite of complexity is what Absolute Resilience does that no one else can do. We leverage our unique Persistence technology, already in the device itself, to self-heal these applications automatically — to restore, repair, or even reinstall an application and help to close that seemingly insurmountable gap.


Enterprise Architecture Vs Solution Architecture – Let the Comparison Begin

Enterprise architecture (EA) in an organization is often defined as the organizing logic for business processes and infrastructure. The primary purpose of creating enterprise architecture is to ensure that business strategy and IT are aligned. Enterprise architecture should make it possible for an organization to achieve traceability from the business strategy down to the technology used to implement that strategy. Enterprise architecture is a journey which acts as a collaborative force among: business planning (strategic), such as goals, visions, strategies, capabilities and governance principles. ... Solution architecture is the process of architecting, designing, and managing the technical and operational architecture of a solution to meet the specific business needs of a business unit of an organization. ... The characteristics of solution architecture are: Modular Design – the architecture should always follow a modular, component-based design rather than monolithic blocks of systems, for easier management and change.


How banks can use seven levers to modernize their core systems

To simplify the CBS, banks can explore three options focused on the removal and carve-out of unused or unneeded modules. Rationalize customizations or modules: banks should analyze unused modules within the CBS code base, screen components, and evaluate other business logic, removing them where necessary. This analysis includes the identification of unwanted customizations of an off-the-shelf platform. McKinsey analysis shows that only 10 percent of existing core-banking-system customizations are regulatory driven or business critical. Carve out master-data components: in most cases, customer data is stored within the core banking system. However, requests directed to core banking for basic products, customer data, and pricing data create significant workloads and costs. To simplify, banks can carve out such functionalities and data, allocating them to dedicated master databases and thus reducing the overall load on the CBS.



Quote for the day:

"Good leadership consists of showing average people how to do the work of superior people." -- John D. Rockefeller

Daily Tech Digest - March 11, 2023

4 Reasons to Outsource Large IT Projects During Economic Headwinds

Great professional services teams accumulate best practices over time and will bring complementary skill sets into the business they’re partnering with. Shared knowledge helps grow the skillset of your internal team and enables them to contribute more meaningfully to the success of your business. Employee satisfaction can even increase from the sense of personal and professional progress that comes with learning new technologies, frameworks, or languages during major IT projects developed in partnership with external experts. This aspect cannot be overlooked, given that 91% of employees report being frustrated with inadequate workplace technology and 71% consider looking for a new employer as a consequence. Expert teams have depth of knowledge across a breadth of tools, which saves tremendous time and many headaches by creating efficient, automated workflows. Ensure that your team gets the opportunity to work directly with your outsourced development team to facilitate knowledge sharing.


Migrating to the Cloud: Is It as Intimidating as It Appears?

Being Cloud Native is often considered crucial for business success in the current business landscape. However, the perception of becoming “Cloud Native” as a drastic change for a business might not necessarily be accurate. In this article, we will delve into the concept of Cloud Migration and its effects on the IT support infrastructure of your business. ... “Cloud Native Services” are those services and infrastructure specifically designed to run on cloud platforms, hosted and maintained by Cloud Providers. These services can include a variety of offerings, such as virtual machines (VMs), application servers, VPNs, load balancers, routers, databases, and disk storage. They can be divided into three main categories: compute services, network services, and storage services. ... Throughout the entire project, it is essential to continuously monitor and manage the cloud environment to ensure that it remains secure, cost-effective, and aligned with the business objectives. 


DevOps as a Graph for Real-Time Troubleshooting

A lot of data for observability is interrelated, but our current tools don’t allow us to view metrics, logs and distributed traces as connected sources of information. These data types are often collected in silos, and correlation of the data is done manually. For example, to know whether a spike in a metric on one service might have something to do with a spike on another service, we often search through the metric charts to find the correlation. You need a solution that connects operational data from the start. ... Digital Enterprise Journal’s recent State of IT Performance Report found that change is the largest source of production issues: 75% of all performance problems can eventually be traced back to changes in the environment. When simple configuration errors can cause a domino effect, there’s a broader lesson to be learned. If you’re not capturing code or configuration changes as part of your observability strategy, it’s time to close that gap.


New variant of the IceFire ransomware targets Linux enterprise systems

Known up to now to target only Windows systems, the IceFire malware detected by SentinelLabs uses an iFire extension, consistent with a February report from MalwareHunterTeam — a group of independent cybersecurity researchers analyzing and tracking threats — that IceFire is shifting focus to Linux enterprise systems. Contrary to its past behavior of targeting technology companies, the Linux variant of IceFire was observed attacking media and entertainment companies. The attackers’ tactics are consistent with those of the "big-game hunting" (BGH) ransomware families, which involve double extortion, attacks against large enterprises, the use of numerous persistence mechanisms, and evasion tactics such as deleting log files, according to the SentinelLabs report. Double extortion occurs when attackers steal data as well as encrypting it, threatening to leak the stolen data unless the ransom is paid.


We chat with ChatGPT itself about the future of AI

AI has already been quite transformative, particularly in the digital marketing industry. How do you think this is going to evolve? One area where AI is likely to have a significant impact is in personalized marketing. By analysing vast amounts of data on individual consumers, AI algorithms can help marketers tailor their messages and campaigns to specific demographics, interests, and behaviours. This can lead to more effective marketing strategies and higher engagement rates. Another area where AI is likely to make a big impact in digital marketing is in automation. By automating routine tasks such as social media posting and email marketing, AI can free up marketers to focus on more strategic activities, such as campaign planning and content creation. Additionally, AI can help improve the accuracy of targeting and reduce wasted ad spend by predicting which users are most likely to convert. Finally, AI-powered chatbots are likely to become increasingly prevalent in the digital marketing space.


What Does the National Cybersecurity Strategy Mean for Public and Private Stakeholders?

Making that shift a reality is going to mean creating incentives. “We must shift incentives so that when entities across the public and private sectors are faced with the trade-offs between easy but temporary fixes and durable, long-term solutions, they have the resources, capabilities, and incentives to consistently choose the latter,” an Office of the National Cyber Director (ONCD) spokesperson said in a statement to InformationWeek. Regulation will be a necessary element in incentivizing this fundamental shift in responsibility. “Our strategy reflects the reality that voluntary measures will not be enough to deliver the cybersecurity posture we need to enable our digital society,” according to the ONCD spokesperson. While new regulation certainly has a role to play, so do other forms of incentive. “Simply adding mandates and regulation could have detrimental economic impacts, promote a ‘bare minimum’ approach to compliance and pass costs downstream. 


Legal Industry Faces Double Jeopardy as a Favorite Cybercrime Target

It isn't just the sensitivity of the data that legal firms handle but also the scope and detail of data that can be dug up by attackers who successfully breach a single firm — especially if it's a large one. One attack can be a one-stop shop for monetizing the data and access stolen from not just one organization, but a whole portfolio of them. "Law firms connect with and support many clients at any given time. Compromising one law firm gives bad actors access to numerous client networks without having to directly reach each one of them," says Michael Tal, technical director for Votiro, a cloud file security firm that works extensively with the legal industry. "Files are the primary form of communication, and weaponizing them gives bad actors a sure way to get clients to open them and infect themselves." ... The other attractive element for hackers is that law firms and legal services companies tend to be very soft targets.


IT leadership: How to lead with strategic clarity

Business leaders who have a complete understanding of where their people and activities are situated in their overall strategy have “strategic clarity.” These leaders are invaluable – they can fill their team’s day with meaningful work, burst through information silos, synergize teams across departments, and articulate every activity’s role in achieving their goals. By connecting every department, goal, person, and task to a shared vision, businesses can unlock their full potential. While establishing strategic clarity might sound complicated, with the right framework, it’s quite intuitive. Here’s how you can accomplish this using the strategic clarity map. The strategic clarity map is a simple but powerful diagram that helps IT leaders define their current and desired situations and determine how to move from current to future states. ... The present-to-future bar is split in the middle. What makes the difference between your present state and actualizing your goals is how you put this next part into action.


Responsible AI at Amazon Web Services: Q&A with Diya Wynn

AWS has a broad strategy. We have a commitment to transforming theory into action. This means changing and influencing the way that we build our services, and the work that I do with customers — engaging with them and helping them bring that practice to life and operationalize it inside their organizations. We invest in education and training to create a more diverse future workforce. There’s an AI/ML scholarship program that’s bringing in those who typically might have been underrepresented, to help them study artificial intelligence and machine learning. We also focus on training and educating those who are part of the product and machine learning lifecycle, because they need to understand and be aware of potential areas of risk and how we mitigate them. The last area from a company perspective is how we invest in advancing the science around responsible AI. We’ve made huge investments and continue to work with institutions; we have scholarships and research grants, provided through the NSF, that are helping to encourage research in the area of responsible AI.


What Are Cloud-Bound Applications?

Broadly, we can group the ways an application binds with its surroundings into two categories. Compute bindings are all the necessary bindings, configurations, APIs, and conventions used to run an application on a compute platform such as Kubernetes, a container service, or even serverless functions (such as AWS Lambda). Mostly, these bindings are transparent to the internal architecture, and they are configured and used by operations teams rather than developers. The container abstraction is the most widespread “API” for application compute binding today. ... Integration bindings is a catch-all term for all other bindings to external dependencies that an application relies upon. The cloud services also use these bindings to interact with the application, usually over well-defined HTTP “APIs” or specialized messaging and storage access protocols, such as AWS S3, Apache Kafka, and Redis APIs. The integration bindings are not as transparent as the compute bindings.
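
One common way to keep an integration binding out of the application's internal architecture is to resolve it from the environment at startup, twelve-factor style. A minimal sketch, where the `StorageBinding` class and variable names are illustrative rather than any platform's actual API:

```python
import os

class StorageBinding:
    """Resolves an integration binding (e.g. an S3-compatible object
    store) from environment variables, so the same application code can
    bind to different backends per environment without code changes."""

    def __init__(self, prefix: str = "STORAGE"):
        self.endpoint = os.environ[f"{prefix}_ENDPOINT"]
        self.bucket = os.environ[f"{prefix}_BUCKET"]

    def object_url(self, key: str) -> str:
        return f"{self.endpoint}/{self.bucket}/{key}"

# Operations teams configure the binding; developers only consume it.
os.environ["STORAGE_ENDPOINT"] = "https://s3.example.internal"
os.environ["STORAGE_BUCKET"] = "invoices"
binding = StorageBinding()
print(binding.object_url("2023/03/inv-001.pdf"))
# → https://s3.example.internal/invoices/2023/03/inv-001.pdf
```

Swapping the backing store then means changing the environment, not the application, which is what makes the binding "transparent to the internal architecture."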



Quote for the day:

"Taking charge of your own learning is a part of taking charge of your life, which is the sine qua non in becoming an integrated person." -- Warren G. Bennis

Daily Tech Digest - March 10, 2023

Why So Much Open Source Software Is Vulnerable to Hackers

A high-risk vulnerability is defined by the Cybersecurity Research Center this way, McGuire said: “They take the advisories from numerous (industry) security feeds, analyze them and send them out to our customers. And as part of this analysis, they assign severity scores. When it comes to open source vulnerabilities, they’re using the CVSS scoring system. It (severity) also depends on whether or not there’s an exploit; whether or not there is a fix available; the type of exploit; how easy it is for somebody to go through and actually exploit the application; whether this can be done remotely; and whether you have access to the running instance. So all these (attributes) are taken into consideration for that score. And then that score is what tells us whether or not it’s a high-severity vulnerability,” McGuire said. Jason Schmitt, general manager of the Synopsys Software Integrity Group, said that the report findings underlined the reality of open source as the underlying foundation of most types of software built today. 
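
The severity scores McGuire describes follow the CVSS v3.x qualitative rating scale (0.1–3.9 Low, 4.0–6.9 Medium, 7.0–8.9 High, 9.0–10.0 Critical). A small sketch of the triage step that maps a base score to its rating:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity rating,
    per the standard v3.1 rating scale."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))  # → Critical
```

As the excerpt notes, the numeric score is only one input: exploit availability, fix availability, and reachability of the running instance all feed into the final triage decision.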


Building An Automation Strategy: A Leadership Quandary

A digital transformation is a colossal effort for an organization, and it has multiple parts. From legacy modernization, cloud migration, hybrid development and enterprise data management to automation and reporting, everything can fall under the purview of digital transformation. Leaders should know when to take a sequential approach and what needs to be done in parallel for all of these efforts to converge at some point. A digital transformation strategy can't be confined to the boardroom, involving only outside consultants and CXOs while lacking participation from the department heads and leaders who are aware of the factors that contribute to inefficiencies and delays. A bottom-up approach to digital transformation is critical, as it can help in identifying priorities, including which departments need automation, the scope of automation for each department, potential use cases, projected returns and more. This requires leaders to spend time at the grassroots level, explaining their vision and ensuring they have organizational support in turning their digital goals into reality.


Marketing Compliance in 2023: What CPOs Need to Know

Consent refers to the compliance measures taken to abide by laws such as the European Union’s General Data Protection Regulation (GDPR) and the various privacy laws in the United States. To be in compliance with the law, companies must obtain permission to collect consumer data and track consumer activity across the internet. We most often see this play out online in the form of a pop-up that appears when one visits a website, asking the visitor to ‘accept’ or ‘decline’ cookies. Another example is the ‘opt in to communications’ box one checks when sharing an email address with a company. Consumer consent is markedly different from preferences because it requires consumers to give permission for companies to communicate with them and track their activity online. Consent requirements vary, however, between different laws; for example, in the E.U., consumers are required to opt in to cookie tracking, whereas in the United States, consumers must opt out. All consent laws, however, require companies to make a consumer’s data available to them upon request.


8 ways to retain top developer talent

DX takes DevOps to the next level. As Guillermo Rauch, CEO and founder of Vercel told me, “Organizations will move from DevOps to dev experience. Great developer experience leads to better developer productivity and improved developer velocity, directly improving your bottom line. Every organization should be thinking, ‘How do I empower my developers to spend more time on the application and product layer while spending minimal time on the backend and infrastructure layer?’” ... Developers create software for two audiences: users and developers — that is, those developers who will work on the product. For users, product excellence is critical. But for developers, excellence inside the product is extremely important as well, and that has big implications for the business using the software. In this sense, DX is an indication of code quality, which says everything about the viability of software. Here, the importance to the business is two-fold. First, systems with good DX are easier to maintain and extend, with software quality a key differentiator between code that can grow and evolve and code that is doomed to degrade and decay.


Stolen credentials increasingly empower the cybercrime underground

"Unlike most modern organizational security teams, threat actors do not operate in silos, and instead pool resources while learning from one another," the company said. "Flashpoint is finding that adept threat actors and ransomware gangs increasingly share code, in addition to tactics, tools, and procedures—largely thanks to the proliferation of illicit markets." Just like ransomware gangs come and go in what seems like a never-ending cycle of rebranding, illegal markets do, too. While there were several law enforcement takedowns or self-shutdowns of big and long-running cybercrime markets -- SSNDOB, Raid Forums, and Hydra being some notable ones -- others quickly popped up to take their place. Cybercriminals usually maintain alternative communication channels like Telegram, where they can keep each other informed and advertise new alternative markets after one disappears. In fact, just last year Flashpoint recorded the emergence of 190 new illicit markets.


Darktrace warns of phishing scam powered by ChatGPT

According to Darktrace, there has been a rise in cybercriminals using ChatGPT to create more personalised and authentic-looking phishing emails in an attempt to breach users’ finances, since the chatbot was released last November, reported The Guardian. However, it’s claimed that there isn’t so much a new wave of attackers targeting businesses and individual users with phishing techniques, as there is a shift in tactics using the Microsoft-backed software. Common features within the emails include “linguistic complexity, including text volume, punctuation and sentence length”, while techniques relying on malicious links in the text are decreasing. “We’re seeing a big shift. ‘Hey, guess what, you’ve won the lottery…’ emails are becoming a thing of the past,” Darktrace CEO Poppy Gustafsson told The Times. “Instead, phishing emails are much more about trying to elicit trust and communication. They’re bespoke, with much more sophisticated language — the punctuation is changing, the language is changing. It’s more about trying to elicit trust.”


5 tips for designing a cloud-smart transformation strategy

Don't just "lift and shift" everything as it is during your transition to the cloud. This approach might seem easy but it risks keeping previous mistakes, blunders, and problems in your program. Instead, reconsider everything. Keep what works, replace what doesn't, and discard the rest. Then migrate only the components that your business requires. We have retired (and continue to retire) many workloads in our transition to a hybrid cloud. The traditional practice of "lift and shift" is giving way to a more methodical and strategic approach to modernization. It's a move motivated by years of hard lessons learned and tears shed during previous cloud implementations. Recognize that workloads are inextricably linked and have highly complex dependencies. You can't just move any job to the cloud at random because that might break something. Even if the long-term goal is to move all workloads to the cloud at the same time, containerization and orchestration provide a useful hybrid option for achieving reasonable levels of flexibility and performance.


Synthetic identity fraud calls for a new approach to identity verification

What can be done to tackle the scourge of synthetic identity fraud? At the industry level, lenders and credit bureaus must come together to develop a standard approach for identifying, classifying and reporting synthetic identities. Targeted businesses also need to share data, as criminals use their synthetic identities at (and often defraud) many different organizations as they build their credit histories. Forming a consortium to share intelligence could bring suspicious patterns of activity to light sooner, likely reducing the risk of massive losses. On an organizational level, a multipronged detection strategy is highly recommended. Enterprises need to implement identity verification solutions that combine online and offline data to comprehensively examine risk signals, such as device behavior biometrics, device identity and reputation, email tenure and reputation, mobile phone tenure and reputation, usage patterns of personally identifiable information, and tenure and activity on social media platforms.
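
As a toy illustration of the multipronged detection strategy, several of the signals listed above can be combined into a single risk score. The signal names, weights, and threshold logic here are entirely hypothetical; a real identity-verification product would calibrate them against labeled fraud outcomes:

```python
# Hypothetical weights for boolean risk signals drawn from the article's
# list (email/phone tenure, device reputation, PII usage patterns, social
# media footprint). Weights sum to 1.0 for a score in [0, 1].
SIGNAL_WEIGHTS = {
    "new_email_domain": 0.25,       # short email tenure
    "new_phone_number": 0.20,       # short mobile phone tenure
    "device_reputation_bad": 0.30,  # device identity and reputation
    "no_social_footprint": 0.15,    # no tenure/activity on social platforms
    "pii_reuse_detected": 0.10,     # same PII seen across other identities
}

def risk_score(signals: dict) -> float:
    """Weighted sum of the boolean risk signals that fired."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

applicant = {"new_email_domain": True, "pii_reuse_detected": True}
score = risk_score(applicant)
print(f"risk score: {score:.2f}")  # 0.25 + 0.10 = 0.35
```

The consortium approach described above would strengthen exactly the `pii_reuse_detected` signal: a synthetic identity reusing one SSN across many lenders is invisible to any single organization but obvious in shared data.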


US Intelligence Ranks China as Top National Security Threat

The big-picture challenge with China, the report says, is its "capability to directly attempt to alter the rules-based global order in every realm and across multiple regions, as a near-peer competitor that is increasingly pushing to change global norms and potentially threatening its neighbors." The U.S. intelligence report also singles out Beijing for its willingness to use cyber operations and economic espionage to advance its domestic technology capabilities and knowledge and as a domestic and foreign lever to expand the Chinese Communist Party's "technology-driven authoritarianism globally." The country controls key supply chains - for batteries, critical minerals, pharmaceuticals, less advanced semiconductors and solar panels - which Chinese President Xi Jinping in 2020 said the country wouldn't hesitate to use for economic and political gain if required. The intelligence assessment says that this could include cutting off supply to other countries in a time of crisis.


IT leadership: 3 ways to boost your Generational IQ (GQ)

People need to feel heard. Across generations, people put a high value on recognition and respect. As a leader, make sure you have processes that allow you to listen more than you talk. Create plenty of opportunities to show appreciation and respect through both informal recognition processes as well as through formal rewards programs. Leaders who do enough listening (and ask the right questions) can weave generational preferences into how they acknowledge and recognize individuals. With the huge amount of diversity not only across but within generations, using surveys to identify those preferences is a smart way to test what works. A great example of this is customizing how you recognize your team after a big project goes live. A Millennial may value a gift card to use for a trip, while a Boomer may find more value in having their wisdom documented in an online training course for future team members.



Quote for the day:

"A good leader can't get too far ahead of his followers" -- Franklin D. Roosevelt

Daily Tech Digest - March 09, 2023

Understanding Data Security Posture Management for Protecting Cloud Data

To help organizations protect their data from data loss, a new approach emerged in 2022 in the form of data security posture management (DSPM). Today it is proving to be a critical tool for effective data security because of its laser focus on the data layer. DSPM allows organizations to identify all their sensitive data, monitor and identify risks to business-critical data, and remediate and protect that information. To get a better handle on this new approach and what it does, let’s consider what DSPM is not. ... DSPM’s ability to autonomously discover, monitor, and remediate risk makes it an effective tool for strengthening an organization’s security posture. Beyond that, your DSPM solution of choice needs to operate in a manner that doesn’t require deployment of agents everywhere. Your DSPM should be easy to get up and running and allow you to quickly realize benefits by mining meaningful amounts of data to deliver visibility into what's going on within your environment from a risk perspective. DSPM solutions are proven to deliver accurate results and offer significant ROI for organizations.
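The discovery step described above can be illustrated with a minimal sketch — the patterns, categories, and function names here are hypothetical assumptions for illustration, not any DSPM vendor's API: scan records for strings matching known sensitive-data shapes and tag them by category.

```python
import re

# Hypothetical patterns; a real DSPM product uses far richer classifiers
# and scans data stores agentlessly rather than record by record.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(record: str) -> set:
    """Return the set of sensitive-data categories found in a record."""
    return {name for name, pat in PATTERNS.items() if pat.search(record)}

findings = classify("Contact jane.doe@example.com, SSN 123-45-6789")
```

Downstream monitoring and remediation then key off this classification: only records tagged as sensitive need risk scoring and access review.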


Arctic Wolf CEO on Incident Response, M&A, Cyber Insurance

Many organizations struggle with preparing for a security incident even if they have an internal security team and have procured cyber insurance, Schneider says. Businesses often haven't prepared their systems or documented escalation paths or how their environment is set up, which makes it nearly impossible to quickly get information over to an incident response provider in the event of an attack, Schneider says. "The less time that you're spending on compiling information, the more time you're able to spend on remediating the threat and the less time you've taken between an incident occurring and the beginning of a response," Schneider says. Most companies don't know what they need to have documented or prepared in the event of a security incident and therefore end up reaching out to their insurance provider or incident responder while an attack is taking place to see what questions they have, Schneider says. Although the answers to these questions are relatively static, he says it takes a lot of time to gather the information needed to respond.


UK government introduces revised data reform bill to Parliament

“Co-designed with business from the start, this new bill ensures that a vitally important data protection regime is tailored to the UK’s own needs and our customs,” said science, innovation and technology secretary Michelle Donelan. “Our system will be easier to understand, easier to comply with, and take advantage of the many opportunities of post-Brexit Britain. No longer will our businesses and citizens have to tangle themselves around the barrier-based European GDPR [General Data Protection Regulation]. “Our new laws release British businesses from unnecessary red tape to unlock new discoveries, drive forward next-generation technologies, create jobs and boost our economy.” The government added the revised bill will also support increased international trade without creating extra costs for businesses already compliant with existing data protection rules, as well as boost public confidence in the use of artificial intelligence (AI) technologies by clarifying the circumstances in which safeguards apply to automated decision-making.


Municipal CISOs grapple with challenges as cyber threats soar

"The diversity of our business services and the corresponding diversity of systems is unparalleled in that no organization does what our municipal government does," Michael Makstman, CISO for the City and County of San Francisco and co-chair of the Coalition of City CISOs, tells CSO. "We fly planes, we pave roads, we provide public safety services," Makstman says. "We operate one of the largest, if not the largest, trauma centers on the West Coast. We support many legal professionals for some of the largest legal firms in the country. At the same time, we make sure that vulnerable populations have access to food and care. We have an outstanding municipal transportation network. We have buses and subways and our world-famous cable car." ... CISOs of municipal organizations of all sizes are required to deftly handle the politics of the governments they serve and the individual service providers themselves, Hamilton says. CISOs are not always welcomed into agencies that do not directly employ them.


Decoding Digital Twins: Exploring the 6 main applications and their benefits

Although the roots of digital twins go back to NASA’s Apollo program in 1970, the concept of creating digital replicas of physical assets and visualizing/simulating/predicting in a virtual world is extremely suitable for companies that are trying to make Industry 4.0 a reality or are aiming toward future industrial metaverse projects. Make no mistake: While the definition of a digital twin may be straightforward, its applications are numerous. In 2020, we published our first market research on the topic and showcased that there may, in fact, be 200 or more different types of digital twins. The feedback we received from you was that classification helps to ensure apples-to-apples digital twin comparisons, but questions remain about the hotspots of activity. Therefore, as part of our new 233-page Digital Twin Market Report 2023-2027, we classified 100 real digital twin projects along the three dimensions and found six main areas of activity. These six digital twin application hotspots cover two thirds of all digital twin projects we analyzed.


Cloud trends 2023: Cost management surpasses security as top priority

For the first time since Flexera began its annual survey of cloud decision-makers, security was not the top challenge reported by respondents. As revealed in the Flexera 2023 State of the Cloud Report, released on March 8, 2023, 82% of respondents from across all organizations indicated that their top cloud challenge is managing cloud spend, edging out security at 79%. These shifting challenges may be the result of organizations becoming increasingly comfortable with cloud security, while needing to manage the greater spend associated with their increased reliance on cloud services. Lack of resources or expertise was reported as a top cloud challenge by 78% of respondents, making it the third major cloud challenge for today’s businesses. ... Cloud cost management responsibilities are often spread across teams within an organization. Year over year, vendor management and finance or accounting teams have less responsibility for cloud expenses. Instead, initiatives are shifting to finops teams. Finops, the practice of cloud cost management, is a growing priority.


Why IT communications fail to communicate

If you prefer to communicate via documentation — and encourage everyone in your organization to follow suit — four facets of communication are getting in your way. Language: Every natural language, be it English, Latin, or even Esperanto, is imprecise at best. Synonyms are approximate, not exact; words are defined by other words, leading us down the path of infinite recursion; different people bring different vocabularies and assumptions to their attempts to interpret what they’re reading. ... Disambiguation: No matter how even the best writers might try, they’ll never create a document that’s completely free of ambiguity and entangled logic. In making the attempt, many find themselves trudging along the literary path of a different profession for which ambiguity and the likelihood of misinterpretation are equally problematic ... Disagreements: No matter how well a business analyst (going back to our app dev example) describes their design, the stakeholders they’ve worked with to create it aren’t always going to agree on all points. Stakeholder disagreements unavoidably turn into design compromises and, worse, inconsistent specifications.


Cloud Native Testing Trends for 2023

Testing in a cloud native environment can be challenging, as it involves testing across multiple platforms and services, using a diverse set of tools that can vary greatly across teams and workflows. The distributed nature of cloud native applications means that testing must be performed on a larger scale, with more components to be tested. DevOps teams must also consider the impact of the underlying infrastructure on testing, as changes to the infrastructure can affect the behavior of the application. To overcome these challenges, organizations are adopting a cloud native testing strategy that incorporates automation and integrates testing into the development process. ... DevOps engineers are increasingly taking ownership of testing, and tools like Testkube can help them easily integrate testing into their workflows. By taking a collaborative approach to testing, DevOps engineers can ensure that testing is done throughout the development life cycle, reducing the risk of bugs slipping through to production.


Stress-Test Your Software to Prevent a Southwest-Type Calamity

Stress tests typically subject a software system to very large workloads in the form of a high volume of requests or a high rate of failure in individual components. “The idea is to simulate a worst-case scenario with potentially unpredictable side effects,” Padhye says. Testing reveals how a system will react to slowdowns, memory leaks, security issues, and data corruption. “Across performance-based testing, stress tests must be paired with load tests,” Feloney advises. “For example, spike tests examine how a system will fare under sudden, high ramp-up traffic, and soak tests examine the system’s sustainability over a long period.” Stress tests can either be performed in an isolated environment designed for quality purposes, or directly on the live customer-facing deployment. “While it sounds scary, testing a live deployment is far more representative of a real extreme scenario, because it also incorporates the human factor presented by users responding to the simulated events in a hard-to-predict way,” Padhye explains.
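The spike test described above can be pictured with a toy sketch against an in-process stand-in service — this is an illustration of the idea, not a real load-testing tool, and the class and parameter names are invented: fire a burst of concurrent requests that exceeds the service's capacity and compare success rates against normal traffic.

```python
from concurrent.futures import ThreadPoolExecutor
import threading
import time

class ToyService:
    """Stand-in for a real endpoint: rejects requests beyond its capacity."""
    def __init__(self, capacity: int):
        self._sem = threading.Semaphore(capacity)

    def handle(self) -> bool:
        if not self._sem.acquire(blocking=False):
            return False          # overloaded: request rejected
        try:
            time.sleep(0.05)      # simulated work
            return True
        finally:
            self._sem.release()

def spike_test(service: ToyService, burst: int) -> float:
    """Fire `burst` concurrent requests and return the success rate."""
    with ThreadPoolExecutor(max_workers=burst) as pool:
        results = list(pool.map(lambda _: service.handle(), range(burst)))
    return sum(results) / burst

svc = ToyService(capacity=20)
normal = spike_test(svc, burst=10)    # under capacity: all succeed
spike = spike_test(svc, burst=200)    # sudden ramp-up: some requests fail
```

In practice the same pattern is driven by dedicated load-testing tools against a staging or live deployment, and a soak test extends the duration of the run rather than the size of the burst.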


Innovating in an economic downturn: 4 tips

During a downturn, you may lose the ability to hire full-time employees but still have things to do and room in your budget. Finance might be more open to a capital expense than an operational expense during these times. This is a perfect opportunity to bring in outside help to take care of your distractions so your team can spend time and energy on innovation. Distractions take a lot of time and effort but aren’t core to what an organization does. For example, organizations today spend a lot of time supporting their applications and systems. As a result, many choose to hire outside firms to handle these activities so that their internal teams can focus on innovation and projects that grow their top line. ... Sometimes you simply don’t have internal resources with an invention mindset or experience innovating. Consultants can help fill the gap, facilitating discussions that drive innovation and partnering with your teams to show them how to work through the innovation process. External experts provide a critical outside perspective and facilitate conversations that drive meaningful innovation.



Quote for the day:

"Leadership without mutual trust is a contradiction in terms." -- Warren Bennis

Daily Tech Digest - March 08, 2023

How AI can help find new employees

AI-based recruitment platforms can find "more diverse talent pools, and [offer] a more accurate approach to qualifying candidates by matching skills rather than on a job title match or other signal,” said Forrester Principal Analyst Betsy Summers. Some of the use cases for talent acquisition platforms are efficiency-oriented, since they’re used for interview scheduling, managing the candidate application process, assisting recruiters with follow-ups, and managing the applicant pipeline. Other platforms also focus on bias mitigation such as adjusting language in job descriptions and candidate communications to be more inclusive. Still others include remote video capabilities that automate early interviews. ... Chatbots are typically employed by recruitment platforms to engage job seekers and ask them about their interests and skills; the bots can then present candidates with open positions for which they’re most qualified to apply.


The EU digital strategy: The impact of data privacy on global business

First, companies may need to assess the impact of the EU digital strategy on their business and their business model and need to identify where changes are required and where additional care needs to be taken with respect to current processes. This specifically applies to the four acts concerning data governance, digital services, AI, and data. Second, companies may need to investigate the possibilities for the applicability of the acts within their organization. This includes possible access to markets that other competitors led in the past through their access to end-user data. Finally, as the EU digital strategy continues to evolve, organizations may be able to further collaborate with governing bodies on the interpretation of the regulations. Specifically, in the case of AI, there are several companies that may find it very challenging to work with their current model in the new guidance. ... Additionally, companies should revisit their current processes for data collection and AI. 


5 best practices for scaling AI in the enterprise

One of the most important challenges of implementing AI is defining the business problem the enterprise is trying to solve. As the saying goes, don’t end up with an answer that’s looking for a question. Simply deploying new forms of technology isn’t the right approach. Next, examine the issues and determine if AI is the best way to tackle the problem. There are other digital technologies well adapted to simple problems. To help ensure success, define the business issue clearly and determine what course to take at the outset — some may not need AI. In automation, the end-to-end process is disaggregated and divided into smaller parts. Each part is then digitized, and the parts are then reaggregated into the value chain. ... So, AI-based transformation is as much about designing a new operating model, cross-skilling the workforce and integrating it into upstream and downstream processes as it is about neural nets and model management. It’s important to note that AI in the enterprise is 20% about technology and 80% about people, processes and data.


Data Privacy: A Public Policy Challenge

In today’s world, improved computational capabilities have enabled businesses and public and private organizations to better structure their data in the form of huge databases and leverage analytics to generate business intelligence and contribute to value creation. With these computational and analytical capabilities, there are increasing avenues to develop profiles of humans’ behavior around their purchasing, spending and consumption habits, their genetic profiling, their travel history, medical history, etc. While these capabilities add value to human society, they also come with risks of intruding into individuals’ privacy. Unfortunately, the discourse around personal data centers only on its protection from leakage or breach. However, the primary objective of safeguarding personal data is to ensure that such data are not processed to create a more inequitable society and bring about unfair outcomes. The amount of discrete data available today allows us to bring more nuance and innovation into public policy, thereby aiding in ironing out imbalances within society.


Why Database Administrators Are Rising in Prominence

“Currently, there’s a very disjointed relationship between DBAs and the business problem they are solving for customers,” Neiweem says. He points out DBAs are often the last touch point for customers, but this is changing as business and marketing leaders glean deeper insights from customer data and look to achieve personalization at scale. “It’s no longer effective to go through this disconnected channel to get answers about customer data,” he says. “DBAs are now moving into a consulting role where they can take data, analyze and action it, enabling marketing and other internal teams to build stronger relationships with customers through those data insights.” Arun Chandrasekaran, product manager for ManageEngine, adds DBAs are often the first link in the chain of acquiring IT tools. “While the decision-makers decide on what to buy, DBAs can influence their decision,” he says. “Since the responsibility of managing the data warehouse falls on DBAs, they work with the stakeholders to understand the business requirements.


Designing For Data Flow

Put simply, the bottlenecks in designs are being defined by the type and volume of data, and the speed at which it needs to be processed. “SoCs are getting bigger and more complex, fitting everything in the actual chip,” he said. “So data exchange, which used to happen at a system level, is now happening within the IC. This means efficient circuit design for data transfer is required to achieve the overall expected performance. The data flow design at the logic level is quite abstract. In the past, the chips were smaller and mostly driven by specific functionality, so there were only a few stages required to plan for data flow. With bigger chips, this has changed, and more effort is needed to understand the data sampling and placement of the appropriate functional modules next to each other, to achieve optimal data flow.” Data integrity also is becoming a challenge. In addition to crosstalk and various types of noise, which are prevalent at advanced nodes, there are a variety of aging effects that can appear over longer lifetimes, thermal mismatch between increasingly heterogeneous components, and latent defects that can become real defects as the amount of processing required on a chip or in a package increases.


Interacting with Machines through IoT and AI: A Revolution in Home and Workplace Technology

The seamless integration of IoT and AI has completely revolutionized the way we interact with machines, offering novel and innovative solutions for both homes and workplaces alike. With the aid of cutting-edge technologies like machine learning, deep learning algorithms, gesture control, and wearable devices, the potential of IoT and AI to create value across a range of applications is colossal. As these technologies continue to advance, the potential for further groundbreaking advancements in the future is undeniable. It is my sincere hope that this blog has been informative and engaging, offering you valuable insight into the current and future state of these two fields. That said, it is also important to remain cognizant of the potential ethical and privacy concerns that come with their widespread adoption. As with any rapidly-evolving technology, it is essential that we consider and address these concerns to ensure that the development and application of these technologies align with our societal values and principles.


How Skyscanner Embedded a Team Metrics Culture for Continuous Improvement

Changing Culture was probably the part that we put the most effort into, because we recognised that any mis-steps could be misinterpreted as us peering over folks’ shoulders, or even worse, using these metrics intended to signal improvement opportunities to measure individual performance. Either of those would be strongly against the way that we work in Skyscanner, and would have stopped the project in its tracks, maybe even causing irreversible damage to the project’s reputation. To that end we created a plan that focused on developing a deep understanding of the intent with our engineering managers before introducing the tool. This plan focused on a bottom-up rollout approach, based on small cohorts of squad leads. Each cohort was designed to be around 6 or 7 leads, with a mix of people from different tribes, different offices, and different levels of experiences, covering all our squads. The smaller groups would increase accountability, because it’s harder to disengage in a small group, and also create a safe place where people can share their ideas, learnings, and concerns.

Managing data is the key to better citizen services

As important as cyber-resilience is, there are also other issues associated with the unchecked growth of a data estate. Top of mind for public sector CIOs is keeping an eye on the purse strings and being accountable to taxpayers for the money they spend. Massive amounts of data cost a similarly large amount of funding to maintain, says Mr Hatchuel. “You have to put data somewhere and managing the cost is very challenging for CIOs.” A modern data protection and management solution allows CIOs to manage their data estates in a cost-effective way, as well as keep them secure. The solution should also protect and manage external data sources which, in a contemporary environment, could be a new public cloud service. “Data can pop up anywhere, so you need a holistic solution able to look across the whole data estate and manage and understand different data sources. CIOs are also advised to understand the Shared Responsibility model of most public cloud services; for the majority of providers, that burden falls to the customer.”


4 ways for CIOs to strike a balance between operation and innovation

“Striking the right balance between innovation and operations is essential for any organization to succeed and stay competitive. Innovation is about exploring new ideas, embracing change, and striving for progress. On the other hand, operations consist of taking those ideas and making them a reality, efficiently utilizing resources, and ensuring that all the necessary steps are in place to deliver the desired result. ... “The load that IT organizations carry with snowballing technical debt has a direct and tangible drain on IT innovation. While it’s obvious on the surface, every dollar spent on technical debt is a dollar that IT cannot invest in innovation and transformation. Maintaining, securing, and operating critical but aging applications and infrastructure is a boat anchor that drags down innovation and must be addressed continuously by IT leadership, architects, and CTOs before it blows up in a disaster. Start by eliminating the 'kick the can down the road' strategy of ignoring technical debt; instead, prioritize actual application modernization investments that can break the pattern and open up innovation cycles as part of a continuous modernization strategy.”



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham

Daily Tech Digest - March 07, 2023

The four qualities of resilient teams

The first quality is team confidence, or the belief that the team can handle just about anything that comes its way. Team confidence, the authors note, isn’t really the sum of a lot of individual confidence, for swollen egos don’t benefit the team. The goal is collective and mutual confidence. And not too much, because overconfidence undermines success. “Moderately high confidence offers a healthy balance of confidence and caution,” the authors write. To build team confidence, managers are urged to make goals and processes clear, empower the team by encouraging members to participate in decision-making, cheer successes, and provide useful feedback during struggles. The second quality is having the foresight to create a teamwork road map, or a plan that “reflects the extent to which all team members know what their own roles and responsibilities are, and the extent to which they agree on what all other team members’ roles and responsibilities are. Team members may even know how to perform one another’s roles so that at any point, one person can step in for another.”


What is zero trust? A model for more effective security

Removing that implicit trust takes time, according to experts, and most organizations are far from accomplishing that objective. “It’s a journey of change,” says Chalan Aras, a member of the Cyber & Strategic Risk practice at Deloitte Risk & Financial Advisory. Zero trust is also a collection of policies, procedures, and technologies. Organizations that want to implement an effective zero-trust strategy must have an accurate inventory of assets, including data. They must have an accurate inventory of users and devices as well as a robust data classification program with privileged access management in place, Valenzuela says. Other components include comprehensive identity management, application-level access control, and micro-segmentation. Another important element is user and entity behavior analytics, which uses automation and intelligence to learn normal (and therefore accepted and trusted) user and entity behaviors from anomalous behaviors that shouldn’t be trusted and therefore denied access.
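The "remove implicit trust" posture these components add up to can be sketched in a few lines — the roles, classifications, and field names below are illustrative assumptions, not a specific product's policy model: every request is checked against device compliance, behavioral signals, and an explicit allow list, and anything not explicitly allowed is denied.

```python
from dataclasses import dataclass

# Hypothetical request attributes; a real deployment pulls these from
# identity, device-management, UEBA, and data-classification systems.
@dataclass(frozen=True)
class Request:
    user_role: str
    device_compliant: bool
    behavior_anomalous: bool
    resource_classification: str  # e.g. "public", "internal", "restricted"

# Explicit allow rules: (role, classification) pairs. Everything else is denied.
ALLOW = {
    ("engineer", "internal"),
    ("engineer", "public"),
    ("analyst", "public"),
}

def authorize(req: Request) -> bool:
    """Deny by default: every check must pass on every request."""
    if not req.device_compliant:
        return False
    if req.behavior_anomalous:  # anomaly flagged by behavior analytics
        return False
    return (req.user_role, req.resource_classification) in ALLOW
```

The key design choice is that there is no "trusted network" branch: the same checks run whether the request originates inside or outside the perimeter.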


Will ChatGPT make low-code obsolete?

Unlike technologies of the past which typically automate or speed up a repetitive process (manufacturing, logistics, transportation etc.), ChatGPT does something entirely new – enhancing the creativity of the user. While we can debate whether this is true creativity or not, ultimately if the outcome is the same, is it not still creative? Think of how ChatGPT could help a software developer crack a particularly challenging piece of code, or how it could optimise existing code. It can also help developers be more creative by reducing the repetitive/boring part of their jobs so they can focus on the parts they love, leaving them more time to flex their creative muscles. Going beyond the developer use case, ChatGPT has the ability to democratise coding itself by providing a way for non-coders to develop applications themselves – in much the same way that low-code promises, but on steroids. This “democratisation of IT” promises a new wave of innovation by enabling organisations to create new processes without the need to engage with IT at all. ChatGPT could achieve the same outcome as low-code but in half the time.


SBOMs should be a security staple in the software supply chain

NIST's standard includes multiple elements, from the software component used and its supplier to version numbers and access to the component's repository. Version levels must be evaluated against release levels, potential threats found, and risks determined. "Unwinding large applications, from open-source operating systems, to in-house developed applications, to third-party 'shrink-wrapped' stacks is fraught with contextual challenges, inventory methods, and manual verification, all of which are prone to error," Masserini writes. While the process of identifying and reporting issues is codified, "it does not address the issue of manually maintaining such an inventory and consistently validating its contents," he says. Automation must be put into every step of the process, from generating and publishing SBOMs to ingesting them – and organizations must then bring vulnerability remediation into their current app security programs without having to adopt new workflows, Lambert says. There are other considerations. SBOMs deliver a lot of information, but organizations need to decide how they're going to use it.
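One way to picture the automated ingestion step: match each SBOM component's exact version against an advisory feed. The records below are hypothetical and deliberately minimal — real SBOMs use formats such as SPDX or CycloneDX, and real matching handles version ranges rather than exact strings.

```python
# Minimal records carrying some of the NIST minimum elements mentioned
# above (supplier, component name, version). Data is invented for illustration.
sbom = [
    {"supplier": "Apache", "name": "log4j-core", "version": "2.14.1"},
    {"supplier": "Example Co", "name": "libwidget", "version": "1.3.0"},
]

# Hypothetical advisory feed: component name -> set of affected versions.
advisories = {"log4j-core": {"2.14.0", "2.14.1", "2.15.0"}}

def affected_components(sbom, advisories):
    """Return SBOM components whose exact version appears in an advisory."""
    return [
        c for c in sbom
        if c["version"] in advisories.get(c["name"], set())
    ]

hits = affected_components(sbom, advisories)
```

With an application inventory mapping each SBOM to an owner, a hit in this check can be routed straight to the team responsible for patching.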


Digital twins could be the key to successful automation

The primary advantage of the digital twin is that it evolves as automation evolves. As a result, if any changes are applied to the automation in the RPA platform, those same changes are reflected in the twin, ideally in real-time or at least near real-time. Operational metrics are also accessible and displayed where the twin resides so that it can be monitored and continuously improved. Beyond changes and operational metrics, a digital twin in automation enables an organization to compile accurate documentation and detailed audit trails for the entire automation estate and maintain it in a single, centralized repository. Doing so not only addresses the problem of misplaced or lost process design documents, but also solves one of the major pain points of automating: An inability to visualize and understand how automations have changed over time. Maintaining digital twins for all automations in a central location — regardless of the RPA platform in which they are designed, deployed and orchestrated — vastly improves automation standardization, governance and visibility.
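A minimal sketch of the audit-trail idea — the class and field names are invented for illustration and do not reflect any RPA platform's API: every change mirrored into the twin is appended to a history record alongside who made it and when, so the evolution of the automation is reconstructable later.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomationTwin:
    """Toy digital twin of one automation, kept in a central repository."""
    name: str
    config: dict
    history: list = field(default_factory=list)

    def apply_change(self, updates: dict, author: str) -> None:
        """Mirror a change from the RPA platform and log it for audit."""
        self.history.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "who": author,
            "before": dict(self.config),
            "after": {**self.config, **updates},
        })
        self.config.update(updates)

twin = AutomationTwin("invoice-bot", {"retries": 3})
twin.apply_change({"retries": 5}, author="ops-team")
```

Storing before/after snapshots rather than just deltas makes the trail self-contained: any historical state of the automation can be read directly from one entry.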


Stepping up: Becoming a high-potential CEO candidate

Stanford University economics professor Nicholas Bloom, who’s spent his career researching CEOs, describes the reality he’s observed: “It’s frankly a horrible job. I wouldn’t want it. Being a CEO of a big company is a hundred-hour-a-week job. It consumes your life. It consumes your weekend. It’s super stressful. Sure, there’re enormous perks, but it’s also all encompassing.” Reinforcing the point, Microsoft CEO Satya Nadella describes the job as “24/7.” His late mentor Bill Campbell, who had been a CEO three times and was an influential coach to several technology industry leaders, would often remind him, “No one has ever lived to outwork the job. It will always be bigger than you.” Many CEOs secretly agree that the best job in the world is actually the one right below the CEO. There the spotlight burns less brightly, yet the opportunities to make a difference are great, as are the rewards. Without the right motivations and expectations, not only will you find that the effort required to be CEO outweighs any personal gain, but you will also be less likely to succeed. As CCHMC’s Fisher puts it, 


Enterprise IT moves forward — cautiously — with generative AI

The technology also needs human oversight. “Systems like ChatGPT have no idea what they’re authoring, and they’re very good at convincing you that what they’re saying is accurate, even when it’s not,” says Cenkl. There’s no AI assurance — no attribution or reference information letting you know how it came up with its response, and no AI explainability, indicating why something was written the way it was. “You don’t know what the basis is or what parts of the training set are influencing the model,” he says. “What you get is purely an analysis based on an existing data set, so you have opportunities for not just bias but factual errors.” Wittmaier is bullish on the technology, but still not sold on customer-facing deployment of what he sees as an early-stage technology. At this point, he says, there’s short-term potential in the office suite environment, customer contact chatbots, help desk features, and documentation in general, but in terms of safety-related areas in the transportation company’s business, he adds, the answer is a clear no.


Career paths for devops engineers and SREs

Solving business challenges today requires multidisciplinary teams and integrated solutions. If you enjoy problem-solving, shift to other organizational roles and develop broader perspectives on what’s required to deliver end-to-end solutions. One opportunity for developers is to shift to data science and machine learning roles. Tiago Cardoso, a product manager at Hyland, says, “Career paths for developers have become much more flexible and individualized, and I’m seeing a lot of new developer roles appearing, such as data engineers, ML engineers, ML architects, and MLops engineers.” He adds, “Common career paths for those in devops and SREs include positions such as systems administrator, infrastructure engineer, and cloud architect.” ... Architect roles and responsibilities vary considerably from one organization to another, but successful architects are more than just technical experts. Architects scale their expertise by helping agile teams learn, apply, and create self-organizing standards around using technology to deliver business solutions.


Zero-Day Vulnerabilities Can Teach Us About Supply-Chain Security

Writing, testing and validating whether a fix will resolve a vulnerability can take trial and error. By definition, zero-days don’t have a patch, meaning it can often be days before developers can even begin the process of patching their applications. Furthermore, software needs to go through QA cycles before a true fix is identified. This is why security controls are necessary for blocking malicious activity before it reaches runtime. Additionally, developers must analyze their software development life cycle (SDLC) and augment it before a vulnerability is announced. An asset or application inventory should be a mandatory component so that when a vulnerability is disclosed, organizations know who owns the application and who to contact. ... Securing third-party or commercial-off-the-shelf software is one of the biggest cybersecurity challenges facing every organization. Unfortunately, most vendors don’t disclose the components and libraries that make up their software, making it difficult for organizations to know whether a vulnerability affects them once it’s disclosed.
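The inventory idea above can be made concrete: when a zero-day is disclosed in a third-party component, an organization that tracks which applications bundle which components (and who owns each application) can immediately identify who to contact. Below is a minimal sketch of such a lookup; the inventory structure, field names, and sample data are all illustrative assumptions, not a real tool or standard format.

```python
# Hypothetical application inventory: each entry maps an application to
# its owning team and the third-party components it bundles. Field names
# and sample data are illustrative assumptions only.
INVENTORY = [
    {"app": "billing-api", "owner": "payments-team@example.com",
     "components": {"log4j-core": "2.14.1", "spring-web": "5.3.20"}},
    {"app": "intranet-portal", "owner": "it-ops@example.com",
     "components": {"jquery": "3.6.0"}},
]

def affected_apps(component, vulnerable_versions):
    """Return (app, owner) pairs that bundle a vulnerable component version."""
    hits = []
    for entry in INVENTORY:
        version = entry["components"].get(component)
        if version in vulnerable_versions:
            hits.append((entry["app"], entry["owner"]))
    return hits

# When a zero-day in log4j-core 2.14.1 is disclosed, find who to notify:
print(affected_apps("log4j-core", {"2.14.1"}))
```

In practice the inventory would be populated from a software bill of materials (SBOM) rather than maintained by hand, which also addresses the excerpt's point that vendors rarely disclose their components.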


Five Factors That Turn CISOs into Firefighters

When a CISO is referred to as a “firefighter,” it typically means that they are spending a significant amount of time responding to security incidents and putting out fires rather than being able to focus on proactively preventing those incidents from occurring in the first place. Here are some reasons why a CISO may become a firefighter:

1. Lack of resources: A CISO may not have sufficient resources (e.g., budget, staff, or technology) to implement a comprehensive cybersecurity program effectively. This can lead to security incidents that require a reactive response.

2. Insufficient risk management: A CISO may not have a robust risk management program in place, which means that security incidents are more likely to occur. Without proper risk management, a CISO may be caught off guard by security incidents and have to react quickly to mitigate the damage.

3. Lack of security awareness: Employees may not be properly trained on cybersecurity best practices, which can lead to security incidents such as phishing attacks or malware infections. ...



Quote for the day:

"Different times need different types of leadership." -- Park Geun-hye