Daily Tech Digest - March 14, 2023

Should There Be Enforceable Ethics Regulations on Generative AI?

Companies might claim they use AI ethically, she says, but more could be done. For example, Rudin says companies tend to claim that putting limits on speech that contributes to human trafficking or vaccine misinformation would also eliminate content that the public would not want removed, such as critiques of hate speech or retellings of someone’s experiences confronting bias and prejudice. “Basically, what the companies are saying is that they can’t create a classifier, like they’re incapable of creating a classifier that will accurately identify misinformation,” she says. “Frankly, I don’t believe that. These companies are good enough at machine learning that they should be able to identify what substance is real and what substance is not. And if they can’t, they should put more resources behind that.” Rudin’s top concerns about AI include the circulation of misinformation, ChatGPT being put to work to help terrorist groups recruit and fundraise on social media, and facial recognition being paired with pervasive surveillance.


6 reasons why your anti-phishing strategy isn’t working

Many organizations are trying to solve the phishing problem solely with technology. These companies are buying all the latest tools to detect suspicious emails and giving employees a way to report and then block those suspicious emails, says Eric Liebowitz, chief information security officer, Americas at Thales Group, a Paris-based company that develops devices, equipment, and technology for the aerospace, space, transportation, defense, and security industries. While doing that is great, in the end the bad actors are always going to be more sophisticated, he says. “One of the big things that I don't think enough organizations are focusing on is training their employees,” Liebowitz says. “They could have all the greatest tools in place, but if they're not training their employees, that's when the bad thing is going to happen.” While some organizations have deployed the right tools and have workflows and processes in place to combat phishing campaigns, they haven't adequately and proactively configured those tools, says Justin Haney, executive, North America security lead at Avanade.


Observability and Monitoring in the DevOps Age

The first step towards achieving success is to know what to measure and monitor. Your business technology ecosystem may comprise many different modular applications with all sorts of possible dependencies. It is important to first lay out the key indicators that must be tracked if engineers are to find remedies when unusual behavior is observed. These indicators are not just internal operational metrics but also customer-facing ones like page-load performance and speed, errors and crashes in web interfaces, and so on. The key to finding the best remedy for any unexpected defect or bug is to trace the root cause of the problem. This means developers and QA engineers must be able to navigate the exact workflow that resulted in a defective output. For this, traceability is an essential factor in every transactional workflow. It helps DevOps teams understand how data and insights are passed between different systems when a transactional request is processed.


The importance of optimising IoT for retailers

Innovation has radically altered the retail customer experience, with e-commerce and brick-and-mortar stores redefining the way the world shops. Interactive digital terminals, virtual and augmented reality tools, and robotic sales assistants support customers and have increased both business efficiency and customer satisfaction. Furthermore, retailers have implemented IoT technologies to optimize existing processes. Radio-frequency identification (RFID) tracking is utilized to streamline warehouse inventory processes, enabling efficient asset management. Once consumers venture to stores, cameras and sensors are employed for footfall tracking purposes, and Wi-Fi connections can detect repeat customers and target them with digital advertisements beyond their trip to the store. While these modifications have improved the retail experience for customers, they may also have increased network traffic and reduced visibility for IT teams, complicating operations and performance management.


3 ways data teams can avoid a tragedy of the cloud commons

Industry analysts estimate that at least 30% of cloud spend is “wasted” each year — some $17.6 billion. For modern data pipelines in the cloud, the percentage of waste is significantly higher, estimated at closer to 50%. It’s not hard to understand how we got here. Public cloud services like AWS and GCP have made it easy to spin resources up and down at will, as they’re needed. Having unfettered access to a “limitless” pool of computing resources has truly transformed how businesses create new products and services and bring them to market. For modern data teams, this “democratization of IT” facilitated by the public cloud has been a game-changer. For one thing, it’s enabled them to be far more agile as they don’t need to negotiate and justify a business case with the IT department to buy or repurpose a server in the corporate data center. And as an operational expenditure, the pay-by-the-drip model of the cloud makes budget planning seem more flexible. However, the ease with which we can spin up a cloud instance doesn’t come without a few unintentional consequences — forgotten workloads, over-provisioned or underutilized resources — with results including spiraling and unpredictable costs.


3 Reasons Women Should Reskill to Work in Cybersecurity

According to an (ISC)² study, women make up roughly a quarter of the overall cybersecurity workforce. We’ve come a long way over the last decade (women made up about 10% of cybersecurity jobs in 2013), but we know the industry needs to work toward even greater diversity. Addressing the gender gap starts with sparking interest at a young age. We can also get creative with our most passionate and loyal current employees and realize not every cybersecurity role is a ‘special snowflake.’ There are many open roles that call for in-depth skills that have been honed and developed over time. What about all the roles that don’t? Here’s the secret: Not everyone who works in cybersecurity needs to be a cybersecurity expert. At least not right away. Cybersecurity expertise can be taught or learned. So, one way to get closer to bridging the talent gap is to reskill talent from other professions.


How Aerospike Document Database supports real-time applications

A real-time document database should have an underlying data platform that provides quick ingest, efficient storage, and powerful queries while delivering fast response times. The Aerospike Document Database offers these capabilities at previously unattainable scales. JSON, a format for storing and transporting data, has passed XML to become the de facto data model for the web and is commonly used in document databases. The Aerospike Document Database lets developers ingest, store, and process JSON document data as Collection Data Types (CDTs)—flexible, schema-free containers that provide the ability to model, organize, and query a large JSON document store. The CDT API models JSON documents by facilitating list and map operations within objects. The resulting aggregate CDT structures are stored and transferred using the binary MessagePack format. This highly efficient approach reduces client-side computation and network costs and adds minimal overhead to read and write calls.
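
To make the JSON-to-CDT mapping concrete, here is a small, hedged sketch. It is not Aerospike client code; it only shows how a JSON document decomposes into the nested maps and lists that Collection Data Types represent, and how such an aggregate can be serialized in the binary MessagePack format (using Python's msgpack package). The document and field names are illustrative.

```python
import json
import msgpack  # pip install msgpack

# A hypothetical order expressed as a JSON document.
order_json = """
{
  "orderId": "A-1001",
  "customer": {"name": "Dana", "tier": "gold"},
  "items": [
    {"sku": "SKU-1", "qty": 2},
    {"sku": "SKU-7", "qty": 1}
  ]
}
"""

# Parsing yields nested maps (dicts) and lists: the same shapes that
# map and list Collection Data Types are designed to hold.
order = json.loads(order_json)

# Map-style access: read a nested field.
print(order["customer"]["tier"])     # -> gold

# List-style access: read the second line item.
print(order["items"][1]["sku"])      # -> SKU-7

# The aggregate structure serializes to the compact binary MessagePack
# format for storage and transfer.
packed = msgpack.packb(order)
print(len(packed), "bytes packed")
assert msgpack.unpackb(packed) == order
```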


Taming the Data Mess, How Not to Be Overwhelmed by the Data Landscape

One common thing that happens is that people get pressured a little bit because of marketing, or because of buzz from peers about the next nice thing everybody's using. Sometimes this is not appropriate for the problems you have. This is something that happened to me with a friend a long time ago, who asked me, "You see, there are these companies. Now we're hearing a lot about this thing called feature stores, and we want to do machine learning in our company. This is a little startup. Maybe I need a feature store for this." That's not the right way to approach this problem. The first question I asked was, do you already have data available, and is this data clean and of good quality? Do you have processes to keep it like that? Because it's not only a one-time shot, you have to keep these things going. More important, is that data already used in your business? Do you have reports, or things like that? These are all steps that are pretty important, even before thinking about machine learning and stuff like that.


4 data challenges for CIOs

CIOs are becoming more aware that they cannot afford to have hidden stores of data or data that exists in siloes. Siloed data may be unknown to the teams that can derive the most value from it, which undercuts its value. And if there are data stores that are completely unknown to a company, there’s no way to protect them. Question your past assumptions about how and where your organization stores and processes data. The results of an end-to-end data discovery process across the network will bring to light previously unknown security issues. To better understand the flow of data, open lines of communication throughout the company. Meet with people at all levels, including the CISO, security manager, operations manager, IT service manager, and individual IT staff, to ensure everyone’s data usage and goals align. Ask employees their perspective on what’s working, what changes must be addressed, and where data blind spots may be. Involving all departments responsible for data improves knowledge and skills and ensures a stronger data strategy overall.
With sustainability gaining importance to companies, the CFO’s value-creation mandate calls for a new way of thinking and making decisions—one that uses sustainability information to guide choices about how to deliver strong performance for investors. Working with the CEO and governing board, today’s CFOs are integral to the development of a sustainable business model, bringing financial and nonfinancial data to bear on a company’s strategic goals, its resource and capital allocation, and its measures of long- and short-term performance. ... In bringing sustainability factors into business decisions, CFOs must also pay attention to the specific priorities of each region where their organization has a presence. In Asia, for example, the United Nations’ Sustainable Development Goals have identified both greater developmental needs than in other regions and greater susceptibility to climate risk. CFOs there need to consider how to be ambitious about decarbonization in a way that is also considerate of local communities, whose interests may differ from those of communities in other regions. 



Quote for the day:

"People buy into the leader before they buy into the vision." -- John C. Maxwell

Daily Tech Digest - March 13, 2023

CFO Cybersecurity Strategies: How to Protect Against the Rising Storm

Think of cybersecurity as an investment in resiliency. Taking a comprehensive approach to cybersecurity increases the odds that your organization will not only identify malicious activity and successfully deflect attackers, but also respond effectively and recover with minimal impact if a worst-case scenario unfolds. However, you need to proactively validate that your company’s approach is truly comprehensive. Historically, cybersecurity has been treated as the purview of IT, while the reality of cybersecurity is much more complex and pervasive. While IT can manage and solve many risks, every leader in an organization has a role to play, across governance, legal, compliance, public relations, human resources, and more. So does every third party, including your vendors, suppliers, contractors, service providers, and customers. So, it’s not only about technology, but people and processes as well. Simply put, cybersecurity is like a tree with a complex root system.


The problem with development speed

Less code, but more impact. That’s the formula for success. But it’s not what many development teams do. For too many, as Gilad details, “a product group that has two-thirds of the output really [can] create four times the impact.” The key, he stresses, is that “most of what we create is a waste, [so] chasing output is actually creating more waste, faster.” All of which sounds great, but telling developers to “do more good things and fewer bad things” is hardly actionable. The trick, Gilad outlines, is to introduce more research and testing earlier in the development process, coupled with a willingness to scrap incomplete projects that aren’t on track for success. It’s not that developers will sit around thinking about success but not shipping. Rather, “you should increase throughput, but not of launches.” Instead, focus on running more “tests and experiments.” By doing so, you’ll end up with fewer projects but ones with a higher impact. This willingness to shed bad code early can make a huge difference.


Tapping AI to Alleviate Cloud Spend

Vladimirskiy says CIOs and executives overseeing an organization’s IT strategy are responsible for evaluating and implementing effective AI-based cloud optimization solutions. Because the efficacy of an AI-based cloud optimization system is based on how well-trained the model responsible for managing the corresponding workload is, it’s not advisable for organizations to start from scratch. “Vendors who focus specifically on this type of optimization will have access to more in-depth data across multiple organizations to train these models and ultimately create successful AI cloud optimization solutions,” he says. Diaz agrees that the key stakeholders when it comes to implementing AI to manage cloud spending and control costs are primarily IT management, but finance plays a key role. ... Finance is involved as the final stop when it comes to paying for cloud resources, controlling what portion of the organization’s budget goes into both the cloud resources and the AI technology used to help manage the cloud.


Contract-Driven Development – A Real-World Adoption Journey

In our search for an alternate solution, we wanted to try contract-driven development because it seemed to satisfy our initial criteria: parallel development of provider and consumer applications; the API specification serving as the API contract (instead of two separate artifacts); an automated technique, other than a code generation-based technique, to ensure providers were adhering to the contract; and more emphasis on API design and promoting collaboration among teams in the process. However, the teams were also skeptical about contract-driven development because this again involved API specifications and contract testing, both of which they had already tried and had seen a low return on investment. However, addressing these concerns was a great starting point for us to get the teams started on contract-driven development. We felt the most convincing way of achieving this would be through a real-world example in their context and taking it to production. ... To gain more buy-in, we set out to introduce contract-driven development to just a handful of teams working on a feature that cut across two or three microservices and frontend components.


The importance of measurement in closing the sustainability gap

Good engineering practices, such as edge-caching, optimised data storage, reusability and code efficiency, can almost always have a positive impact on sustainability. Applications that require less compute power use less electricity, which ultimately leads to a net reduction in CO2-equivalent emissions. It is important to take these factors into account when choosing architectural options and following green engineering best practices. The gains may be small at the level of the developer but become clearly significant when scaled up to production levels. Quantitative measurement is essential to evidence that improvement. Sustainability has also become part of DevOps vocabulary as GreenOps, focusing on improving continuous integration and delivery from the perspective of reducing emissions. A critical part of this role is adding sustainability reports to existing dashboarding approaches, giving organisations a real-time window on that closing sustainability gap. The key is managing customer and organisational objectives throughout, and treating sustainability like a transformation programme.


Fighting financial fraud through fusion centers

The boundaries between cybersecurity and fraud/financial crime have been blurred in recent years. Indeed, cyberattacks on financial services are often the first stage of fraud taking place. Take common attacks like phishing or account takeovers, for example. Are these cyber-attacks, fraud, or both? And fraud isn’t always an immediate process; some fraudulent schemes go on for years. Who has responsibility for what, and when? The truth is that cyber-attacks and fraud are now too closely linked to be considered separately. But many firms still have investigative fraud teams and cybersecurity teams operating independently, along with the systems and processes that support them. As a result, these teams have different levels of access to various data repositories, and do not necessarily use the same toolsets to analyze them. That data is arriving at fluctuating speeds, in multiple formats, and in huge volumes. Some firms may have to navigate a complex legacy technology environment to access that data. In short, there is no consistent context within which a unified decision can be made.


Schneider Electric CIO talks IT staffing, sustainability, and digital transformation

“All workers, including IT workers, must have a connection to their company’s mission, and ownership over what their company’s goals and values are. At Schneider Electric, it is important that IT workers understand what we do as a business, in addition to our overarching mission of creating and offering solutions to help our customers. This attitude creates awareness, as well as dedication to their role within IT and the broader company.” “As for the specific traits of these workers, one that is learned is what we call the power couple model — a domain and digital leader — when the business leaders and technology leaders complement each other by playing different roles in solutioning. The domain, or business leaders, are responsible for the 'what' and the 'why,' while the digital leaders are responsible for the 'how' and the 'when.' They do this through leveraging new technology to offer the most efficient solutions to customers and create a beneficial partnership.”


Tech purchasing decisions – how many involved?

The key to good decision-making is having the department that best understands how the technology will be used involved, but also ensuring that the leaders who really understand technology are in the room as well. That means understanding, from the beginning, who needs to be involved in the decision and ensuring they are in the conversation. For those selling the technology, whether through brand awareness or lead generation campaigns, it means realising that targeting a single decision-maker who works in the specific function where their technology is used is the wrong strategy. Yes, those selling, for example, martech need to have the marketing function on board, but they also need buy-in from IT, procurement, sales, finance and HR. ... The business world is becoming increasingly interconnected, meaning the impacts of the decisions leaders make are far more wide-reaching. Whether companies are considering their sustainability strategy, navigating supply-chain risk or purchasing a new CRM system, leaders increasingly need to understand what is happening outside their function and how it impacts them. 


Can AI solve IT’s eternal data problem?

Most enterprises today maintain a vast expanse of data stores, each one associated with its own applications and use cases—a proliferation that cloud computing has exacerbated, as business units quickly spin up cloud applications with their own data silos. Some of those data stores may be used for transactions or other operational activities, while others (mainly data warehouses) serve those engaged in analytics or business intelligence. To further complicate matters, “every organization on the planet has more than two dozen data management tools,” says Noel Yuhanna, a VP and principal analyst at Forrester Research. “None of those tools talk to each other.” These tools handle everything from data cataloging to MDM (master data management) to data governance to data observability and more. Some vendors have infused their wares with AI/ML capabilities, while others have yet to do so. At a basic level, the primary purpose of data integration is to map the schema of various data sources so that different systems can share, sync, and/or enrich data. The latter is a must-have for developing a 360-degree view of customers, for example.
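
As a toy illustration of that schema-mapping step (all field names, sources, and the merge rule here are hypothetical, not taken from any particular tool), the sketch below normalizes records from two differently shaped silos into one canonical customer view:

```python
from typing import Any

# Hypothetical records from two silos with different schemas.
crm_record = {"cust_id": "C-42", "full_name": "Ada Lovelace", "email_addr": "ada@example.com"}
billing_record = {"customerId": "C-42", "name": "Ada Lovelace", "country": "UK"}

# Field-level mappings from each source schema to a canonical schema.
MAPPINGS = {
    "crm":     {"cust_id": "customer_id", "full_name": "name", "email_addr": "email"},
    "billing": {"customerId": "customer_id", "name": "name", "country": "country"},
}

def to_canonical(source: str, record: dict) -> dict:
    """Rename source-specific fields to the canonical schema, dropping unknowns."""
    mapping = MAPPINGS[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

def merge(*records: dict) -> dict:
    """Enrich a single customer view by merging canonicalized records."""
    merged: dict[str, Any] = {}
    for r in records:
        merged.update(r)
    return merged

customer_360 = merge(to_canonical("crm", crm_record), to_canonical("billing", billing_record))
print(customer_360)
# {'customer_id': 'C-42', 'name': 'Ada Lovelace', 'email': 'ada@example.com', 'country': 'UK'}
```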


Navigating Your Data Science Career

Similar to finding your passion as a data scientist, finding new opportunities to diversify skill sets and experience is extremely helpful when trying to grow your career. There are many business sectors that require data science. Many of my retail coworkers have gone on to have great careers in media, finance, supply chain, social platforms, banking, and many other industries. Having a diverse background can open many more opportunities in the future. Not only can having a robust background be more attractive to recruiters, it can also be helpful in case a market downturn occurs in a given business sector which may limit future career opportunities. Although exploring different data science fields can be beneficial, as a developing data scientist you often have many opportunities to expand your skill set within your current company. Take retail, for example; data science expertise is required in sectors such as marketing, pricing, logistics, and merchandising. Being open to new positions provides the opportunity to gain new industry knowledge and become a more valuable and well-rounded employee.



Quote for the day:

"Leadership is intangible, and therefore no weapon ever designed can replace it." -- Omar N. Bradley

Daily Tech Digest - March 12, 2023

What is DevSecOps and How Does it Work?

DevSecOps is a methodology that emphasizes integrating security practices into the software development process. The idea is to promote collaboration and communication among development, security, and operations teams to incorporate security throughout the entire software development lifecycle. DevSecOps is a combination of three words: Development + Security + Operations. The approach acknowledges that security is an integral part of the software development cycle, and we should integrate it right from the beginning instead of treating it as an afterthought. ... Incorporate security practices as early as possible in the software development lifecycle. This is because the entire DevSecOps team is collectively responsible for ensuring the security of your system. By implementing security from the beginning, your team can discover and fix security threats early, providing smooth delivery cycles. ... DevSecOps can significantly increase your chances of success by ensuring the software you develop is free of any issues. However, getting it right is a real challenge.
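
As one concrete example of shifting security left, the minimal sketch below shows the kind of check that can run on every commit or pull request long before code reaches production. It is illustrative only: the secret patterns are simplified, and real teams would typically rely on established scanners rather than a hand-rolled script.

```python
import re
import sys
from pathlib import Path

# Simplified, illustrative patterns for secrets that should never reach a repository.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS-style access key id
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),  # private key material
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),  # hardcoded password
]

def scan(paths: list) -> int:
    """Return the number of findings so a CI job can fail the build on any hit."""
    findings = 0
    for path in paths:
        text = Path(path).read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            for pattern in SECRET_PATTERNS:
                if pattern.search(line):
                    findings += 1
                    print(f"{path}:{lineno}: possible secret ({pattern.pattern})")
    return findings

if __name__ == "__main__":
    # Usage: python scan_secrets.py <changed files...>
    sys.exit(1 if scan(sys.argv[1:]) else 0)
```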


The AI Act: What Does It Mean for Patenting Products

The Act lists three categories of AI systems. The first one relates to systems associated with an ‘unacceptable risk’. It includes systems which seek to manipulate vulnerable persons, social scoring and the use of real-time biometric data, such as face recognition (with limited exceptions for law enforcement). These systems are simply prohibited in the EU. The second category is ‘high risk’. There are two main parts to this: systems which are key to safety and systems which could potentially be socially damaging, such as those systems where bias could be particularly harmful. For instance, AI systems associated with access to opportunities in life, such as education, employment, credit scores, and public services, fall into this category. The Act is intended to ensure that everyone is treated fairly and not subjected to prejudice or discrimination baked into an AI system. The AI Act introduces additional burdens in bringing such systems to market if they have an AI element.
 

Don’t Get Caught Off Guard: A Roadmap to Cyber Resilience

The terms cybersecurity and cyber resilience have been used interchangeably by many. While both share the same objective, implementation is where they differ. While cybersecurity emphasizes deploying strategies that prevent cyber-attacks from penetrating the systems, cyber resilience is a holistic approach that encompasses resisting, navigating, and surviving the entire lifecycle of an attack. In short, cyber resilience has a broader scope than cybersecurity. According to the World Economic Forum’s 2022 Global Cybersecurity Outlook, the average cost of a corporate breach is $3.6 million per incident, and it takes roughly 280 days to identify and address a penetration. These figures alone underscore the need for a game plan. Building defenses along the perimeter and following a siloed approach are methods of past years. Considering the massive attack landscape that currently exists, business leaders must steer towards a holistic cybersecurity strategy that involves identifying and securing all vulnerable endpoints.


It’s a weird, weird quantum world

Shor’s work was the first to show that a quantum computer could solve a real, practical problem. His talk set the seminar abuzz, and the news spread, then became conflated. Four days after his initial talk, physicists across the country were assuming Shor had solved a related, though much thornier problem: prime factorization — the challenge of finding a very large number’s two prime factors. ... “It was like the children’s game of ‘telephone,’ where the rumor spread that I had figured out factoring,” Shor says. “And in the four days since [the talk], I had!” By tweaking his original problem, Shor happened to find a similar quantum solution for prime factorization. His solution, known today as Shor’s algorithm, showed how a quantum computer could factorize very large numbers. Quantum computing, until then little more than a thought experiment, suddenly had in Shor’s algorithm an instruction manual for a very real, and potentially disruptive, application. His work simultaneously ignited multiple new lines of research in quantum computing, information science, and cryptography.
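
For readers who want the gist, the reduction behind Shor's algorithm is standard textbook material (not detailed in the article): factoring N reduces to finding the period of a^x mod N. The toy Python sketch below is entirely classical, with brute-force period finding standing in for the quantum Fourier transform that provides the actual speedup.

```python
from math import gcd

def find_period(a: int, n: int) -> int:
    """Brute-force the period r of f(x) = a^x mod n. This is the step a
    quantum computer speeds up with the quantum Fourier transform."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def factor_via_period(n: int, a: int):
    """Turn the period of a^x mod n into factors of n, when the chosen base
    a cooperates; otherwise return None and retry with another base."""
    if gcd(a, n) != 1:
        return gcd(a, n), n // gcd(a, n)   # lucky: a already shares a factor with n
    r = find_period(a, n)
    if r % 2 != 0 or pow(a, r // 2, n) == n - 1:
        return None                        # unusable period; pick a different a
    p = gcd(pow(a, r // 2) - 1, n)
    q = gcd(pow(a, r // 2) + 1, n)
    return (p, q) if p * q == n else None

print(factor_via_period(15, 7))   # -> (3, 5)
```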


Why You Should Give a Damn About Software Design

The Factory Design Pattern is a programming concept that allows you to create objects in a more flexible and controlled way. Imagine you need to create many products for your store, but each object is created differently based on some conditions. For example, if you were building cars, you know that they will all require at least 4 wheels, a gas tank, an engine, and so forth, but every car will have a unique color, shape, year, and model. Instead of creating each car entirely from scratch, you can build a blueprint to determine exactly how each car should be engineered. No need to keep returning to the drawing board. The factory has a method that takes in some parameters and based on those parameters, it creates the appropriate object and returns it to you. This way, you can create many objects easily and you can change how the objects are created by changing the factory’s method, instead of changing the entire program.
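
A minimal Python sketch of the pattern as described above; the car attributes and class names are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class Car:
    """Shared blueprint: every car gets four wheels, a gas tank, and an engine."""
    model: str
    color: str
    year: int
    wheels: int = 4
    has_gas_tank: bool = True
    has_engine: bool = True

class Sedan(Car):
    pass

class SUV(Car):
    pass

def car_factory(kind: str, color: str, year: int) -> Car:
    """The factory method: callers pass parameters, and the factory decides
    which concrete class to build, so creation logic lives in one place."""
    if kind == "sedan":
        return Sedan(model="sedan", color=color, year=year)
    if kind == "suv":
        return SUV(model="suv", color=color, year=year)
    raise ValueError(f"Unknown car kind: {kind}")

# Callers never touch the concrete classes directly; changing how cars are
# built means changing only the factory, not the whole program.
lot = [car_factory("sedan", "blue", 2023), car_factory("suv", "red", 2022)]
print(lot[0])
```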


Good Things Happen When DevSecOps and Cloud Collide

Cloud-based data now accounts for 39% of successful cyberattacks. Containerized applications, which have been a boon to both migration and management, can also lead to vulnerabilities – so it is fitting that security is cited as a top concern by more than half of the organizations surveyed. ... The idea is simple: You must find a way, a process, a method, and the right partners to help secure all workloads across any cloud environment, regardless of the platform or the amount of data and application real estate needed. By establishing this model, organizations are able to create a fundamental layer of protection against the ever-evolving threat of cybercriminals. Take one of our large banking customers, for example, who runs critical applications on AWS with stringent security and compliance requirements. We implemented a secured framework to protect their applications running on modern, cloud-native services like containers and Lambda functions using DevSecOps principles and cloud-native SIEM solutions.


Every third employee in IT will soon be a gig worker- are you one of them?

For enterprises, engagement with the gig workforce brings cost savings, the flexibility of an ad-hoc, project-based working model that can be scaled or descaled quickly, quick onboarding, and access to highly skilled, niche talent. However, engaging with gig workers comes with its own set of challenges, including concerns around data security, IP theft, access management, cultural orientation, etc. These challenges span across planning, onboarding, execution, and payment phases in the lifecycle of gig workers. The study reveals that more than 70 per cent of CXOs feel that onboarding and execution are the two difficult yet crucial phases, addressing which can enable widespread adoption of the gig economy model. Technologies such as cloud, artificial intelligence (AI), and cybersecurity are being leveraged to address such challenges in a transparent and productive way. Cloud technology, which enabled the seamless transition to remote work, will be critical in addressing the challenges of the gig economy.


Cyber Resilience More Than A Software Problem

From our unique position in the BIOS of millions of active devices, we can see security applications from the world’s leading security companies, running in some of the most sophisticated security environments by some of the strongest cyber teams, still operating at 60 to 70 per cent resiliency — meaning they are only installed, running and healthy across 60 to 70 per cent of the devices where they are required for compliance. Another way to think about that is $0.30 to $0.40 of every dollar spent could be wasted if those controls are not healthy and working to protect the user. That complexity is what we need to tackle for certain. And understanding that the end result will never be zero risk — resiliency in spite of complexity is what Absolute Resilience does that no one else can do. We leverage our unique Persistence technology, already in the device itself, to self-heal these applications automatically — to restore, repair, or even reinstall an application and help to close that seemingly insurmountable gap.


Enterprise Architecture Vs Solution Architecture – Let the Comparison Begin

Enterprise architecture (EA) in an organization is often defined as the organizing logic for business processes and infrastructure. The primary purpose of creating enterprise architecture is to ensure that business strategy and IT are aligned. Enterprise architecture should make it possible for an organization to achieve traceability from the business strategy down to the technology used to implement that strategy. Enterprise architecture is a journey that acts as a collaborative force among several areas, including business planning (strategic), covering goals, visions, strategies, capabilities and governance principles. ... Solution architecture is a process of architecting, designing, and managing the technical and operational architecture of a solution to meet specific business needs of a business unit of an organization. ... The characteristics of solution architecture include modular design: the architecture should always follow a modular, component-based design rather than monolithic blocks of a system, for easier management and change


How banks can use seven levers to modernize their core systems

To simplify the CBS, banks can explore three options focused on the removal and carve-out of unused or unneeded modules. Rationalize customizations or modules: Banks should analyze unused modules within the CBS code base, screen components, and evaluate other business logic, removing them if necessary. This analysis includes the identification of unwanted customizations of an off-the-shelf platform. McKinsey analysis shows that only 10 percent of existing core-banking-system customizations are regulatory driven or business critical. Carve out master-data components: In most cases, customer data is stored within the core banking system. However, requests directed to core banking for basic products, customer data, and pricing data create significant workloads and costs. To simplify, banks can carve out such functionalities and data, allocating them to dedicated master databases and thus reducing the overall load of the CBS.



Quote for the day:

"Good leadership consists of showing average people how to do the work of superior people." -- John D. Rockefeller

Daily Tech Digest - March 11, 2023

4 Reasons to Outsource Large IT Projects During Economic Headwinds

Great professional services teams accumulate best practices over time and will bring complementary skill sets into the business they’re partnering with. Shared knowledge helps grow the skillset of your internal team, and enables them to contribute more meaningfully to the success of your business. Your employee satisfaction can even increase from personal and professional progress felt when learning new technology, frameworks, or languages throughout major IT projects developed in partnership with external experts. This aspect cannot be overlooked, given that 91% of employees report being frustrated with inadequate workplace technology and 71% consider looking for a new employer as a consequence. Expert teams have the depth of knowledge on a breadth of tools that help save tremendous time and many headaches by creating ​efficient, automated workflows. Ensure that your team gets the opportunity to work directly with your outsourced development team to facilitate knowledge sharing.


Migrating to the Cloud: Is It as Intimidating as It Appears?

Being Cloud Native is often considered crucial for business success in the current business landscape. However, the perception of becoming “Cloud Native” as a drastic change for a business might not necessarily be accurate. In this article, we will delve into the concept of Cloud Migration and its effects on the IT support infrastructure of your business. ... “Cloud Native Services” are those services and infrastructure specifically designed to run on cloud platforms, hosted and maintained by Cloud Providers. These services can include a variety of offerings, such as virtual machines (VMs), application servers, VPNs, load balancers, routers, databases, and disk storage. They can be divided into three main categories: compute services, network services, and storage services. ... Throughout the entire project, it is essential to continuously monitor and manage the cloud environment to ensure that it remains secure, cost-effective, and aligned with the business objectives. 


DevOps as a Graph for Real-Time Troubleshooting

A lot of data for observability is interrelated, but our current tools don’t allow us to view metrics, logs and distributed traces as connected sources of information. These data types are often collected in siloes, and correlation of the data is done manually. For example, to know if a spike in a metric on one service might have something to do with a spike on another service, we often search through the metric charts to find the correlation. You need a solution that connects operational data from the start. ... Digital Enterprise Journal’s recent State of IT Performance Report found that change is the largest source of production issues. 75% of all performance problems can eventually be traced back to changes in the environment. When simple configuration errors can cause a domino effect, there’s a broader lesson to be learned. If you’re not capturing code or configuration changes as part of your observability strategy, it’s time to close that gap.
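
A deliberately simplified illustration of that connected-data idea (the node names and edges below are hypothetical): model changes, services, and metrics as a graph, then walk it backwards from an alerting metric to the changes that might explain it.

```python
from collections import defaultdict

# Edges point from a cause to what it can affect:
# change -> service it was deployed to, service -> service it calls,
# service -> metric it emits.
edges = defaultdict(list, {
    "change:config-123":  ["service:checkout"],
    "service:checkout":   ["service:payments", "metric:checkout_latency_p99"],
    "service:payments":   ["metric:payment_error_rate"],
})

def upstream_causes(target: str) -> set:
    """Walk the edges in reverse to find every node that can influence `target`."""
    reverse = defaultdict(list)
    for src, dsts in edges.items():
        for dst in dsts:
            reverse[dst].append(src)
    seen, stack = set(), [target]
    while stack:
        node = stack.pop()
        for parent in reverse[node]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# Which recorded changes could explain a spike in payment errors?
print(upstream_causes("metric:payment_error_rate"))
# -> the checkout and payments services, plus change:config-123 (set order may vary)
```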


New variant of the IceFire ransomware targets Linux enterprise systems

Known up to now to target only Windows systems, the IceFire malware detected by SentinelLabs uses an iFire extension, consistent with a February report from MalwareHunterTeam — a group of independent cybersecurity researchers analyzing and tracking threats — that IceFire is shifting focus to Linux enterprise systems. Contrary to past behavior targeting technology companies, the Linux variant of IceFire was observed attacking media and entertainment companies. The attackers’ tactics are consistent with those of the "big-game hunting" (BGH) ransomware families, which involve double extortion, attacks against large enterprises, the use of numerous persistence mechanisms, and evasion tactics such as deleting log files, according to the SentinelLabs report. Double extortion occurs when attackers steal data as well as encrypting it, and usually ask for a ransom that's double the usual payment.


We chat with ChatGPT itself about the future of AI

AI has already been quite transformative, particularly in the digital marketing industry. How do you think this is going to evolve? One area where AI is likely to have a significant impact is in personalized marketing. By analysing vast amounts of data on individual consumers, AI algorithms can help marketers tailor their messages and campaigns to specific demographics, interests, and behaviours. This can lead to more effective marketing strategies and higher engagement rates. Another area where AI is likely to make a big impact in digital marketing is in automation. By automating routine tasks such as social media posting and email marketing, AI can free up marketers to focus on more strategic activities, such as campaign planning and content creation. Additionally, AI can help improve the accuracy of targeting and reduce wasted ad spend by predicting which users are most likely to convert. Finally, AI-powered chatbots are likely to become increasingly prevalent in the digital marketing space.


What Does the National Cybersecurity Strategy Mean for Public and Private Stakeholders?

Making that shift a reality is going to mean creating incentives. “We must shift incentives so that when entities across the public and private sectors are faced with the trade-offs between easy but temporary fixes and durable, long-term solutions, they have the resources, capabilities, and incentives to consistently choose the latter,” an Office of the National Cyber Director (ONCD) spokesperson said in a statement to InformationWeek. Regulation will be a necessary element in incentivizing this fundamental shift in responsibility. “Our strategy reflects the reality that voluntary measures will not be enough to deliver the cybersecurity posture we need to enable our digital society,” according to the ONCD spokesperson. While new regulation certainly has a role to play, so do other forms of incentive. “Simply adding mandates and regulation could have detrimental economic impacts, promote a ‘bare minimum’ approach to compliance and pass costs downstream. 


Legal Industry Faces Double Jeopardy as a Favorite Cybercrime Target

It isn't just the sensitivity of the data that legal firms handle but also the scope and detail of data that can be dug up by attackers who successfully breach a single firm — especially if it's a large one. One attack can be a one-stop shop for monetizing the data and access stolen from not just one organization, but a whole portfolio of them. "Law firms connect with and support many clients at any given time. Compromising one law firm gives bad actors access to numerous client networks without having to directly reach each one of them," says Michael Tal, technical director for Votiro, a cloud file security firm that works extensively with the legal industry. "Files are the primary form of communication and weaponizing them gives bad actors a sure way to get the clients to open and infect the clients." ... The other attractive element for hackers is that law firms and legal services companies tend to be very soft targets.


IT leadership: How to lead with strategic clarity

Business leaders who have a complete understanding of where their people and activities are situated in their overall strategy have “strategic clarity.” These leaders are invaluable – they can fill their team’s day with meaningful work, burst through information silos, synergize teams across departments, and articulate every activity’s role in achieving their goals. By connecting every department, goal, person, and task to a shared vision, businesses can unlock their full potential. While establishing strategic clarity might sound complicated, with the right framework, it’s quite intuitive. Here’s how you can accomplish this using the strategic clarity map. The strategic clarity map is a simple but powerful diagram that helps IT leaders define their current and desired situations and determine how to move from current to future states. ... The present-to-future bar is split in the middle. What makes the difference between your present state and actualizing your goals is how you put this next part into action.


Responsible AI at Amazon Web Services: Q&A with Diya Wynn

AWS has a broad strategy. We have a commitment to transforming theory into actions. This means how we’re changing and influencing the way that we build our services and the work that I do with customers — engaging and helping them bring that practice to life, operationalize it inside their organizations. We invest in education and training to create a more diverse future workforce. There’s an AI/ML scholarship program that’s bringing in those that typically might have been underrepresented to help them study artificial intelligence and machine learning. We also focus on training and educating those who are part of the product and machine learning lifecycle because they need to understand and be aware of potential areas of risk and how we mitigate them. The last area from a company perspective is about how we invest in advancing the science around responsible AI. We’ve made huge investments and continue to work with institutions, we have scholarships or research grants that are being provided in the way of NSF that are helping to encourage research in the area of responsible AI.


What Are Cloud-Bound Applications?

Broadly, we can group the way an application binds with its surroundings into two categories. Compute bindings are all the necessary bindings, configurations, APIs, and conventions used to run an application on a compute platform such as Kubernetes, a container service, or even serverless functions (such as AWS Lambda). Mostly, these bindings are transparent to the internal architecture, and configured and used by operations teams rather than developers. The container abstraction is the most widespread “API” for application compute binding today. ... Integration bindings is a catch-all term for all other bindings to external dependencies that an application is relying upon. The cloud services also use these bindings to interact with the application, usually over well-defined HTTP “APIs,” or specialized messaging and storage access protocols, such as AWS S3, Apache Kafka, Redis APIs, etc. The integration bindings are not as transparent as the runtime bindings.
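
A small sketch of what an integration binding can look like from the application's side (interface, class, and configuration names are hypothetical). The application codes against a narrow interface; the concrete binding, a local directory here, an object store or message broker in a cloud deployment, is selected by configuration outside the code.

```python
import os
from typing import Protocol

class BlobStore(Protocol):
    """The binding the application depends on, not a concrete cloud service."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class LocalBlobStore:
    """A local-filesystem implementation, useful for development and tests."""
    def __init__(self, root: str) -> None:
        self.root = root
        os.makedirs(root, exist_ok=True)
    def put(self, key: str, data: bytes) -> None:
        with open(os.path.join(self.root, key), "wb") as f:
            f.write(data)
    def get(self, key: str) -> bytes:
        with open(os.path.join(self.root, key), "rb") as f:
            return f.read()

def make_blob_store() -> BlobStore:
    """Resolve the integration binding from environment configuration.
    In a cloud deployment this is where an object-store-backed implementation
    would be returned instead; the application code below does not change."""
    backend = os.environ.get("BLOB_BACKEND", "local")
    if backend == "local":
        return LocalBlobStore(os.environ.get("BLOB_ROOT", "/tmp/blobs"))
    raise NotImplementedError(f"No binding configured for backend {backend!r}")

store = make_blob_store()
store.put("greeting.txt", b"hello, cloud-bound app")
print(store.get("greeting.txt"))
```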



Quote for the day:

"Taking charge of your own learning is a part of taking charge of your life, which is the sine qua non in becoming an integrated person." -- Warren G. Bennis

Daily Tech Digest - March 10, 2023

Why So Much Open Source Software Is Vulnerable to Hackers

A high-risk vulnerability is defined by the Cybersecurity Research Center this way, McGuire said: “They take the advisories from numerous (industry) security feeds, analyze them and send them out to our customers. And as part of this analysis, they assign severity scores. When it comes to open source vulnerabilities, they’re using the CVSS scoring system. It (severity) also depends on whether or not there’s an exploit; whether or not there is a fix available; the type of exploit; how easy it is for somebody to go through and actually exploit the application; whether this can be done remotely; and whether you have access to the running instance. So all these (attributes) are taken into consideration for that score. And then that score is what tells us whether or not it’s a high-severity vulnerability,” McGuire said. Jason Schmitt, general manager of the Synopsys Software Integrity Group, said that the report findings underlined the reality of open source as the underlying foundation of most types of software built today. 
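
For context, CVSS v3 maps numeric base scores onto standard qualitative severity bands, with high severity starting at 7.0. The tiny helper below encodes those published bands; note that the report's own severity calls also weigh factors such as exploit availability and fixes, which this sketch does not model.

```python
def cvss_v3_severity(score: float) -> str:
    """Map a CVSS v3.x base score to its standard qualitative rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_v3_severity(7.5))   # -> High
print(cvss_v3_severity(9.8))   # -> Critical
```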


Building An Automation Strategy: A Leadership Quandary

A digital transformation is a colossal effort for an organization and it has multiple parts to it. From legacy modernization, cloud migration, hybrid development and enterprise data management to automation and reporting, everything can fall under the purview of digital transformation. Leaders should know when to take a sequential approach and what needs to be done in parallel for all of these efforts to converge at some point. A digital transformation strategy can't be restricted to the board room alone with outside consultants and CXOs involved, lacking participation from department heads and leaders who are aware of the factors that contribute to inefficiencies and delays. A bottom-up approach to digital transformation is critical, as it can help in identifying priorities, including which departments need automation, the scope of automation for each department, potential use cases, projected returns and more. This requires leaders to spend time at a grassroots level, explaining their vision and ensuring they have organizational support in turning their digital goals into reality.


Marketing Compliance in 2023: What CPOs Need to Know

Consent refers to the compliance measures taken to abide by laws such as the European Union’s General Data Protection Regulation law and the various privacy laws in the United States. In order to be in compliance with the law, companies must obtain permission to collect consumer data and track consumer activity across the internet. We most often see this play out online in the form of a pop-up that appears when one visits a website asking a visitor to ‘accept’ or ‘decline’ cookies. Another example is the ‘opt in to communications’ box one checks when sharing an email address with a company. Consumer consent is markedly different than preferences because it requires consumers to give permission for companies to communicate with them and track their activity online. Consent varies, however, between different laws; for example, in the E.U., consumers are required to opt in to cookie tracking, whereas in the United States, consumers would need to object. All consent laws, however, require companies to make a consumer’s data available to them upon request.


8 ways to retain top developer talent

DX takes DevOps to the next level. As Guillermo Rauch, CEO and founder of Vercel told me, “Organizations will move from DevOps to dev experience. Great developer experience leads to better developer productivity and improved developer velocity, directly improving your bottom line. Every organization should be thinking, ‘How do I empower my developers to spend more time on the application and product layer while spending minimal time on the backend and infrastructure layer?’” ... Developers create software for two audiences: users and developers — that is, those developers who will work on the product. For users, product excellence is critical. But for developers, excellence inside the product is extremely important as well, and that has big implications for the business using the software. In this sense, DX is an indication of code quality, which says everything about the viability of software. Here, the importance to the business is two-fold. First, systems with good DX are easier to maintain and extend, with software quality a key differentiator between code that can grow and evolve and code that is doomed to degrade and decay.


Stolen credentials increasingly empower the cybercrime underground

"Unlike most modern organizational security teams, threat actors do not operate in silos, and instead pool resources while learning from one another," the company said. "Flashpoint is finding that adept threat actors and ransomware gangs increasingly share code, in addition to tactics, tools, and procedures—largely thanks to the proliferation of illicit markets." Just like ransomware gangs come and go in what seems like a never-ending cycle of rebranding, illegal markets do, too. While there were several law enforcement takedowns or self-shutdowns of big and long-running cybercrime markets -- SSNDOB, Raid Forums, and Hydra being some notable ones -- others quickly popped up to take their place. Cybercriminals usually maintain alternative communication channels like Telegram, where they can keep each other informed and advertise new alternative markets after one disappears. In fact, just last year Flashpoint recorded 190 new illicit markets emerge. 


Darktrace warns of phishing scam powered by ChatGPT

According to Darktrace, there has been a rise in cybercriminals using ChatGPT to create more personalised and authentic-looking phishing emails in an attempt to breach users’ finances, since the chatbot was released last November, reported The Guardian. However, it’s claimed that there isn’t so much a new wave of attackers targeting businesses and individual users with phishing techniques, as there is a shift in tactics using the Microsoft–backed software. Common features within the emails include “linguistic complexity, including text volume, punctuation and sentence length”, while techniques relying on malicious links in the text are decreasing. “We’re seeing a big shift. ‘Hey, guess what, you’ve won the lottery…’ emails are becoming a thing of the past,” Darktrace CEO Poppy Gustafsson told The Times. “Instead, phishing emails are much more about trying to elicit trust and communication. They’re bespoke, with much more sophisticated language — the punctuation is changing, the language is changing. It’s more about trying to elicit trust.”


5 tips for designing a cloud-smart transformation strategy

Don't just "lift and shift" everything as it is during your transition to the cloud. This approach might seem easy but it risks keeping previous mistakes, blunders, and problems in your program. Instead, reconsider everything. Keep what works, replace what doesn't, and discard the rest. Then migrate only the components that your business requires. We have retired (and continue to retire) many workloads in our transition to a hybrid cloud. The traditional practice of "lift and shift" is giving way to a more methodical and strategic approach to modernization. It's a move motivated by years of hard lessons learned and tears shed during previous cloud implementations. Recognize that workloads are inextricably linked and have highly complex dependencies. You can't just move any job to the cloud at random because that might break something. Even if the long-term goal is to move all workloads to the cloud at the same time, containerization and orchestration provide a useful hybrid option for achieving reasonable levels of flexibility and performance.


Synthetic identity fraud calls for a new approach to identity verification

What can be done to tackle the scourge of synthetic identity fraud? At the industry level, lenders and credit bureaus must come together to develop a standard approach for identifying, classifying and reporting synthetic identities. Targeted businesses also need to share data, as criminals use their synthetic identities at (and often defraud) many different organizations as they build their credit histories. Forming a consortium to share intelligence could bring suspicious patterns of activity to light sooner, likely reducing the risk of massive losses. On an organizational level, a multipronged detection strategy is highly recommended. Enterprises need to implement identity verification solutions that combine online and offline data to comprehensively examine risk signals, such as device behavior biometrics, device identity and reputation, email tenure and reputation, mobile phone tenure and reputation, usage patterns of personally identifiable information, and tenure and activity on social media platforms.
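
As a purely illustrative sketch (the signal names, weights, and threshold below are invented, not taken from any vendor or the article), a multipronged check might combine several such risk signals into one score that routes risky applications to manual review:

```python
# Hypothetical risk signals for one credit application, each normalized to 0..1,
# where higher means more suspicious.
signals = {
    "email_tenure":      0.8,   # address created very recently
    "phone_tenure":      0.7,   # number recently issued or ported
    "pii_reuse":         0.9,   # same SSN seen with different names and birthdates
    "device_reputation": 0.3,
    "social_footprint":  0.6,   # thin or templated social media presence
}

# Hypothetical weights reflecting how strongly each signal is assumed to
# predict a synthetic identity in this toy model.
weights = {
    "email_tenure":      0.15,
    "phone_tenure":      0.15,
    "pii_reuse":         0.40,
    "device_reputation": 0.10,
    "social_footprint":  0.20,
}

risk_score = sum(weights[name] * value for name, value in signals.items())
decision = "manual review" if risk_score >= 0.5 else "auto-approve"
print(f"risk score: {risk_score:.2f} -> {decision}")
```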


US Intelligence Ranks China as Top National Security Threat

The big-picture challenge with China, the report says, is its "capability to directly attempt to alter the rules-based global order in every realm and across multiple regions, as a near-peer competitor that is increasingly pushing to change global norms and potentially threatening its neighbors." The U.S. intelligence report also singles out Beijing for its willingness to use cyber operations and economic espionage to advance its domestic technology capabilities and knowledge and as a domestic and foreign lever to expand the Chinese Communist Party's "technology-driven authoritarianism globally." The country controls key supply chains - for batteries, critical minerals, pharmaceuticals, less advanced semiconductors and solar panels - which Chinese President Xi Jinping in 2020 said the country wouldn't hesitate to use for economic and political gain if required. The intelligence assessment says that this could include cutting off supply to other countries in a time of crisis.


IT leadership: 3 ways to boost your Generational IQ (GQ)

People need to feel heard. Across generations, people put a high value on recognition and respect. As a leader, make sure you have processes that allow you to listen more than you talk. Create plenty of opportunities to show appreciation and respect through both informal recognition processes as well as through formal rewards programs. Leaders who do enough listening (and ask the right questions) can weave generational preferences into how they acknowledge and recognize individuals. With the huge amount of diversity not only across but within generations, using surveys to identify those preferences is a smart way to test what works. A great example of this is customizing how you recognize your team after a big project goes live. A Millennial may value a gift card to use for a trip, while a Boomer may find more value in having their wisdom documented in an online training course for future team members.



Quote for the day:

"A good leader can't get too far ahead of his followers" -- Franklin D. Roosevelt