Daily Tech Digest - December 24, 2021

A CIO’s Guide To Hybrid Work

CIOs reimagining an organization’s digital strategy need to ensure that their employees can communicate effectively and have complete access to the resources needed to perform their jobs. This means that employees receive not just a laptop and an email account but full access to a complete tech stack and set of solutions that empower them to interact with their peers and customers. AI- and ML-powered solutions help enhance the employee experience by freeing up time for people to connect with their teams and by helping to foster mental well-being along with a company’s values and purpose. The best way to understand whether your employees are well supported in carrying out their jobs is by gathering feedback from them. Send out a simple form with both open and closed questions on potential communication gaps, remote work support and access to available resources. Once you have all the information, analyze the gaps and improvement opportunities to pick the right tools. Make sure that the tools you choose integrate with your organization’s tech ecosystem while delivering value.


Whatever Happened to Business Supercomputers?

Supercomputers are primarily used in areas in which sizeable models are developed to make predictions involving a vast number of measurements, notes Francisco Webber, CEO at Cortical.io, a firm that specializes in extracting value from unstructured documents. “The same algorithm is applied over and over on many observational instances that can be computed in parallel, hence the acceleration potential when run on large numbers of CPUs,” says Webber. Supercomputer applications, he explains, can range from experiments in the Large Hadron Collider, which can generate up to a petabyte of data per day, to meteorology, where complex weather phenomena are broken down to the behavior of myriads of particles. There's also a growing interest in graphics processing unit (GPU)- and tensor processing unit (TPU)-based supercomputers. “These machines may be well suited to certain artificial intelligence and machine learning problems, such as training algorithms [and] analyzing large volumes of image data,” Buchholz says.


The State of Hybrid Workforce Security 2021

The time is right for IT leaders to turn to their teams and gain a clear understanding of what they actually have in place. While the initial response to the pandemic was reactive, now is a moment to assess an organization’s app and security landscape and what is actually providing access to users no matter where they are, whether they’re at home, in the branch, or anywhere in between. Rationalizing the purpose and usage of solutions that are in place today provides a real opportunity for consolidation—one that did not seriously exist previously. Many organizations will be able to drive better outcomes around security posture, reducing risk, and improving total cost of ownership. Consolidating the number of disparate tools in use to provide secure user access improves security posture consistency and reduces the number of policies that have to be administered. Besides reducing needed multi-product training and management effort, a platform approach drives better economies of scale, resulting in a lower total cost of ownership. Net-net, consolidation delivers a far more effective approach for security.


What is Web3, is it the new phase of the Internet and why are Elon Musk and Jack Dorsey against it?

In the Web3 world, search engines, marketplaces and social networks will have no overriding overlord. So you can control your own data and have a single personalised account where you could flit from your emails to online shopping and social media, creating a public record of your activity on the blockchain system in the process. A blockchain is a secure database that is operated by users collectively and can be searched by anyone; people are also rewarded with tokens for participating. It comes in the form of a shared ledger that uses cryptography to secure information. This ledger takes the form of a series of records or “blocks” that are each added onto the previous block in the chain, hence the name. Each block contains a timestamp, data, and a hash. This is a unique identifier for all the contents of the block, sort of like a digital fingerprint. ... The idea of a decentralised internet may sound far-fetched but big tech companies are already betting big on it and even assembling Web3 teams.
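
To make that block structure concrete, here is a minimal TypeScript sketch of a chain of blocks, each carrying a timestamp, data, its own hash and the previous block's hash. This is illustrative only; the field names, the SHA-256 choice and the transaction strings are assumptions, not a description of any particular blockchain.

```typescript
import { createHash } from "node:crypto";

// A minimal block: timestamp, payload data, the previous block's hash,
// and this block's own hash (its "digital fingerprint").
interface Block {
  timestamp: number;
  data: string;
  prevHash: string;
  hash: string;
}

// Hash all of the block's contents, so any tampering changes the fingerprint.
function fingerprint(timestamp: number, data: string, prevHash: string): string {
  return createHash("sha256")
    .update(`${timestamp}|${data}|${prevHash}`)
    .digest("hex");
}

// Append a new block that points at the previous one, forming the chain.
function addBlock(chain: Block[], data: string): Block {
  const prevHash = chain.length > 0 ? chain[chain.length - 1].hash : "0".repeat(64);
  const timestamp = Date.now();
  const block: Block = { timestamp, data, prevHash, hash: fingerprint(timestamp, data, prevHash) };
  chain.push(block);
  return block;
}

const chain: Block[] = [];
addBlock(chain, "alice pays bob 5 tokens"); // hypothetical transactions
addBlock(chain, "bob pays carol 2 tokens");
```

Because each hash covers the previous block's hash, altering any historical block would invalidate every block that follows it, which is what makes the ledger tamper-evident.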


Will A.I. Guarantee Our Humane Futures?

Both private firms and governments, which would be adopting A.I.-driven technologies, could be attracted to the opportunity of violating the individual’s privacy and data security for their own selfish reasons. Large private corporations, especially technology and social media companies such as the big four of big tech (Google, Amazon, Apple, and Facebook), are already sitting on massive quantities of user data, which they’re looking to monetize, and such monetization of data in the name of customized services and targeted advertisements could have a disastrous impact on the user’s privacy and data security. The bigger threat will emerge when such sensitive user data is misused for social engineering to alter the customer's behavior and choices. ... Today, algorithms are so sophisticated that they can predict the user's next action based on their private data analysis. It’s very much possible to make use of such user data to nudge the individual discreetly to alter his behavior and choices, and this has far-reaching implications for the economy, for society, as well as for the security of a democratic nation.


Protection against the worst consequences of a cyberattack

Businesses need an incident response plan that will clearly outline the steps to be followed when a data breach occurs. By neglecting to do so, the organization will become the low-hanging fruit that attackers go after. Even a rudimentary plan is better than no plan at all, and those without one will suffer a much higher impact. Teams need to identify and classify data to understand what levels of protection are needed, a step that is regrettably missed all the time. For instance, personally identifiable customer information needs a different level of protection to the photos from the last Christmas party. Teams also need to maintain cyber hygiene through regular patching, and since 90% of breaches start with an email, it is very important to have email protection, multi-factor authentication and endpoint protection to prevent any lateral movement by cybercriminals. Perhaps my biggest piece of advice is to have experienced personnel monitoring your environment 24/7, 365 days a year (including Christmas). 


Initial access brokers: How are IABs related to the rise in ransomware attacks?

Initial access brokers sell access to corporate networks to any person wanting to buy it. Initially, IABs were selling company access to cybercriminals with various interests: getting a foothold in a company to steal its intellectual property or corporate secrets (cyberespionage), finding accounting data enabling financial fraud or even just credit card numbers, adding corporate machines to botnets, using the access to send spam, destroying data, etc. There are many cases in which buying access to a company can be interesting for a fraudster, but that was before the ransomware era. ... Ransomware groups saw an opportunity here to stop spending time on the initial compromise of companies and to focus instead on the internal deployment of their ransomware and sometimes the complete erasure of the companies' backup data. The cost for access is negligible compared with the ransom that is demanded of the victims. IAB activities have become increasingly popular in cybercriminal underground forums and marketplaces. 


8 Real Ways CIOs Can Drive Sustainability, Fight Climate Change

The concept of the circular economy has been around for a while, but it’s now taking off in a big way. NTT’s Lombard says that it’s a key to getting to net zero. This means establishing business and IT supply chains that focus on optimizing the lifespan of equipment, moving toward zero-emission closed loop recycling and curtailing e-waste. For example, there’s a growing second-hand market for high-end gear, including hyperscale infrastructure. Companies like IT Renew recertify these systems and place them under warranty. “Everyone wins,” says Lucas Beran, principal analyst at consulting firm Dell’Oro Group. “The original user gets two or three years of use; the buyer gets another three or four years -- all while TCO and the carbon footprint drop.” ... Data centers are expected to consume about 8% of the world's electricity by 2030. While refreshing legacy servers, optimizing data, virtualizing workloads, consolidating virtual machines and green hosting all deliver benefits, these strategies aren’t enough to tackle climate change. Organizations must fundamentally rethink data center design and function.


How Safety Became One of The Most Critical Smart City Applications

For cities, it can be challenging to ensure citizen and worker safety when natural disasters occur. Incidents such as hurricanes, floods, fires and gas leaks are unpredictable and often impossible to prevent. To put it in perspective, most people have lived through some disaster, with 87% of consumers saying they’ve been impacted by one in the last five years (not counting the COVID pandemic). Safety will only become more critical over the next few decades as natural disasters are becoming more frequent, intense and costly. Since 1970, the number of disasters worldwide has more than quadrupled to around 400 a year. Since 1998, natural disasters worldwide have killed more than 1.3 million people and left another 4.4 billion injured, homeless, displaced, or in need of emergency assistance. Smart sensors and advanced analytics can help communities better predict, prepare for and respond to these emergency situations. For example, IoT sensors, such as pole tilt, electric distribution line, leak detection and air quality sensors, can be leveraged to mitigate risk and minimize damage.


Avoiding Technical Bankruptcy: a Whole-Organization Perspective on Technical Debt

It is regrettable that the meaning of the technical debt metaphor has been diluted in this way, but in language as in life in general, pragmatics trump intentions. This is where we are: what counts as "technical debt" is largely just the by-product of normal software development. Of course, no-one wants code problems to accumulate in this way, so the question becomes: why do we seem to incur so much inadvertent technical debt? What is it about the way we do software development that leads to this unwanted result? These questions are important, since if we can go into technical debt, then it follows that we can become technically insolvent and go technically bankrupt. In fact, this is exactly what seems to be happening to many software development efforts. Ward Cunningham notes that "entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation". That stand-still is technical bankruptcy.



Quote for the day:

“When you take risks you learn that there will be times when you succeed and there will be times when you fail, and both are equally important.” -- Ellen DeGeneres

Daily Tech Digest - December 23, 2021

Top 6 trends in data and analytics for 2022

A data fabric is an architecture that provides visibility of data and the ability to move, replicate and access data across hybrid storage and cloud resources. Through near real-time analytics, it puts data owners in control of where their data lives across clouds and storage so that data can reside in the right place at the right time. IT and storage managers will choose data fabric architectures to unlock data from storage and enable data-centric vs. storage-centric management. For example, instead of storing all medical images on the same NAS, storage pros can use analytics and user feedback to segment these files, such as by copying medical images for access by machine learning in a clinical study or moving critical data to immutable cloud storage to defend against ransomware. Many organizations today have a hybrid cloud environment in which the bulk of data is stored and backed up in private datacenters across multiple vendor systems. As unstructured (file) data has grown exponentially, the cloud is being used as a secondary or tertiary storage tier. It can be difficult to see across the silos to manage costs, ensure performance and manage risk. 


2022 technology trend review, part one: Open source, cloud, blockchain

Blockchain platforms are by and large open source too, but although data-related, theirs is a different story. Let's get that out of the way: was 2021 a breakout year for blockchain? No, not really. Will 2022 be a breakout year for blockchain? Probably not. But that's not the point. Blockchain's sudden rise to stardom in 2017 was rather abrupt and premature. The concepts and the technology are still under development, while mainstream adoption is still tentative. To speak in hype cycle terms, blockchain is going through the Trough of Disillusionment. But that does not mean it's without significance. To reiterate: the transformational potential is there, but there's still a long way to go, both on the technical and on the organizational and operational side of things. In 2020, blockchain-powered DeFi rose to prominence. In 2021, DeFi hit the reality wall. DeFi stands for Decentralized Finance. In short, DeFi's promise is to be able to cut out middlemen from all kinds of transactions. In 2020, DeFi saw lots of growth, some of it warranted, we noted last year.


Best of 2021 – 7 Popular Open Source CI/CD Tools

Argo CD is a CI/CD tool for Kubernetes development. It is an open source project currently in incubation at the Cloud Native Computing Foundation (CNCF). It uses Git repositories to store the state of Kubernetes applications, monitors applications and can resync clusters to the desired state, as represented in the Git configuration. This innovative approach also allows you to store multiple desired states of a Kubernetes application, using branches, tags, or by pinning manifest versions using a Git commit. This provides a flexible environment for managing Kubernetes configurations during the development process. ... CircleCI is an open source CI/CD tool. It includes features for job orchestration, resource configuration, caching, debugging, security and dashboard reports. CircleCI integrates with a variety of tools, including GitHub, Heroku, Slack and Docker. CircleCI is available in three tiers, one of which is free. You can use it in the cloud or on-premises with Linux, Mac or Windows machines. 


Managing state with Elf, a new reactive framework

Elf is a reactive and immutable state management library built on top of RxJS. Elf provides us with a wide array of tools to manage our state. Because of this, there is some terminology we should know, like observables, observers, and subscriptions. Observables are objects that can emit data over a period of time. They function as wrappers around data sources or streams of values. Observers are consumers of the data observables store. They execute a piece of code if the data being observed is mutated or if an error occurs, and react to state changes. They also implement up to three methods: next, error, and complete. We will not look at these in detail because they are specific to RxJS and therefore beyond the scope of this article. Subscriptions are how we connect observers to observables. Observers subscribe to observables, watch for any changes in the data, and react to those changes. ... Elf entities are unique types of Elf stores. Entities act in the same manner as tables in a database, and we can store large collections of similar data in entities.
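
Since Elf sits on top of RxJS, a minimal RxJS sketch in TypeScript shows the three concepts side by side. Note this is plain RxJS, not Elf's own store API, and the emitted values are made up for illustration.

```typescript
import { Observable } from "rxjs";

// An observable: a wrapper around a stream of values emitted over time.
const counter$ = new Observable<number>((subscriber) => {
  subscriber.next(1);    // emit data to whoever is listening
  subscriber.next(2);
  subscriber.complete(); // signal that the stream is finished
});

// An observer: implements up to three methods -- next, error, and complete.
const observer = {
  next: (value: number) => console.log("state changed:", value),
  error: (err: unknown) => console.error("something went wrong:", err),
  complete: () => console.log("no more emissions"),
};

// A subscription connects the observer to the observable...
const subscription = counter$.subscribe(observer);
// ...and is torn down once we no longer care about changes.
subscription.unsubscribe();
```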


FBI: Another Zoho ManageEngine Zero-Day Under Active Attack

The bug is the third zero-day under active attack that researchers have discovered in the cloud platform company’s ManageEngine suite since September, spurring dire warnings from the FBI and researchers alike. Though no one has yet conclusively identified the APT responsible, it’s likely the attacks are linked and those responsible are from China, previous evidence has shown. Earlier this month, researchers at Palo Alto Networks Unit 42 revealed that state-backed adversaries were using vulnerable versions of ManageEngine ServiceDesk Plus to target a number of U.S. organizations between late October and November. The attacks were related to a bug revealed in a Nov. 22 security advisory by Zoho alerting customers to active exploitation of the newly registered CVE-2021-44077, found in ManageEngine ServiceDesk Plus. The vulnerability, which allows for unauthenticated remote code execution, impacts ServiceDesk Plus versions 11305 and below. 


Vulnerabilities to fraud are increasing across the board

In a phenomenon McKinsey referred to as The Quickening, e-commerce saw more than a decade’s worth of growth in the first quarter of 2020, as more consumers than ever before turned to digital solutions. According to media regulator Ofcom, UK adults spent an average of three hours and 47 minutes online every day during the pandemic, prompting an increase in the number of personal accounts for banking, financial services, e-commerce shopping and media streaming. As logins soared, so did the opportunities for fraud. While new account opening fraud remains the most popular form of automated attack across the customer journey, with one in 11 transactions in the Digital Identity Network estimated to be an attempt, overall this attack vector fell 10% YoY. A corresponding growth of 52% in login attacks and an 18% growth in payment attacks – testing stolen card credentials – reinforce the hypothesis that fraudsters are automating attacks to test the validity of stolen credentials on an industrial scale.


3 Meaningful KPIs to Focus Agile Development, DevOps, and IT Ops to Deliver Business Outcomes

Speed without guard rails and safety can lead to disastrous crashes – but stagnation and creating bureaucracy-driven change processes that slow the delivery of innovation, new capabilities, and improvements can lead to disruption. Whether you are agile, DevOps, or IT Ops-centric, we’re all trying to deliver positive business outcomes through transformation management. And change failure rate is the first indicative KPI of how well IT performs in delivering business outcomes. When change failure rates are high, IT has to slow down and fix things, while business stakeholders lose trust in IT. And that’s just the start of impacts because change failures can lead to outages, security issues, and other major incidents. A measurement is only as good as its ability to lead to action. Using an AIOps platform to improve root cause analysis by correlating incidents to the changes that caused them is a best practice for identifying systemic causes and helping reduce change failure rates.
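
As a working definition (the common DORA-style formulation; the article itself does not give a formula), the KPI can be written as:

```latex
\text{Change failure rate} =
  \frac{\text{changes that cause a failure in production}}
       {\text{total changes deployed}} \times 100\%
```

Tracked over time, a falling ratio suggests that delivery speed is being achieved without sacrificing safety.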


The Future of Banking When a ‘New Normal’ Has Yet to be Defined

Everything that we thought was going to be the future in 2030 ended up just being how we get through the next 12 months. This means that we now need to reset our expectations about what innovation really looks like, because no one’s impressed with you having a mobile app anymore or having a digital channel, or having some level of automation, or accepting digital signatures. Let’s face it, if you hadn’t figured out how to do these things in the most recent period, you’re probably no longer in business. ... The ‘Great Resignation’ is actually accelerating automation. Since no one can find people to work, they’re doubling down on automation, artificial intelligence, and machine learning. When more people reenter the workforce, we need to start to define what a human’s good at and what a machine’s good at. Part of being an effective human leader or worker in the future will be the ability to constantly reinvent yourself. Likewise, the key component of someone you want to look for in the future is the ability to destroy their own job.


Transforming government software development and digital services

There are a plethora of country-specific laws and digital government initiatives that aim to rethink public sector IT. One example of the collaborative approach mentioned earlier is Germany’s Online Access Act which aims to bring together the country’s 16 federal states and 11,000 local governments under one digital banner. This means that all services offered at federal, state and local level are to be accessible online via their own portals, with these portals linked within a network. With a digital account, citizens can reach all federal, state and local services from this network in just three clicks. To enable this, uniform IT standards and interfaces are necessary across the board. Another interesting development is the public sector taking cues from Silicon Valley to become more efficient, moving from a bureaucratic culture to a generative one. One example of this is Kessel Run, which aims to revolutionise the software acquisition process for the United States Air Force (USAF). 


Combating Synthetic ID Fraud in 2022

Technologies such as machine learning are also being used by security vendors to fight against SIF. "SIF’s use of machine learning is largely what makes it effective at bypassing legacy fraud detection systems. Needless to say, banks can use the same technology to identify these attacks. However, despite having multiple vendors out there claiming to leverage machine learning techniques, financial institutions have so far failed to combat SIF," says People’s United Bank’s Boyer. Boyer says financial institutions are not using these technologies in the right manner. "Financial institutions need to start using machine learning techniques correctly. Many businesses have a 'set it once and forget it' approach. There has to be some kind of human interaction to differentiate between fraud and legitimate transactions." And vendors must change their approach too, she says. "Vendors are checking personally identifiable information that has been used previously to verify its legitimacy."



Quote for the day:

"Don't necessarily avoid sharp edges. Occasionally they are necessary to leadership." -- Donald Rumsfeld

Daily Tech Digest - December 22, 2021

Cybersecurity spending trends for 2022: Investing in the future

Despite the steady state of funding, CISOs aren’t going to be flush with cash. Security leaders and executive advisors say security departments must continue to show that they’re delivering value for the dollars spent, maturing their operations, and, ultimately, improving their organization’s security posture. “Organizations know that risks are increasing every day, and as such, investments continue to pour into cybersecurity,” says Joe Nocera, leader of PwC’s Cyber & Privacy Innovation Institute. “We’re hearing from business leaders that they’d be willing to spend anything to not end up on the front page of a newspaper for a hack, but they don’t want to spend a penny more than is necessary and they want to make sure they’re spending their money in the right areas. That’s going to require the CEO and CISOs to work together. CISOs need to know what the right level of protection is.” Nocera adds: “Cyber investments are becoming less about having the latest products from tech vendors and more about first understanding where the business is most vulnerable, then prioritizing investments by how likely an attack will occur and how substantial that loss could be to the business.”


Why CISOs Shouldn’t Report to CIOs in the C-Suite

A very common complaint I hear from CISOs is that they do not receive the resources they need to secure their enterprises. While some companies understand how and where the CISO fits into the leadership structure, the majority do not. One individual that works for a local government told me he took a position as a CIO rather than a CISO because he “knew the CISO role was that of a fall guy.” He believes he was only offered the CISO position because the CIO wanted someone to blame if things went badly. This example clearly shows the conflict of interest that exists when a CISO reports to a CIO. One CISO working in the industrial market told me that there’s an “inherent tension between me and others that report to the CIO.” This frequently occurs due to the trade-off between security and efficiency, which impacts business units throughout an enterprise. When manufacturing wants to continue running a legacy system with outdated software and the CISO says no, this impacts revenue. 


Why Do We Need An Agile Finance Transformation

Embracing agility strategically and tactically while encouraging a fail-fast environment ensures teams have adaptable processes, collaborative mindsets, and a bias for continuous improvement. An agile finance function is prepared to provide assurance for financial results and contribute to strategic decisions in the face of evolving market conditions, the accelerated pace of change, and the introduction of unforeseeable circumstances. CFOs, controllers, finance and accounting professionals, and students alike are, therefore, encouraged to develop agile and scrum expertise to elevate individual, functional, and organizational performance, further strengthening the finance function’s value proposition for decades to come. Utilizing agile and scrum to redefine approaches to core activities like financial planning and analysis, internal audit, and financial close can position management accountants to better support the unprecedented number of transformation initiatives organizations embark upon today. Further, the agile finance function can realize elevated outcomes, maximized value, and expedited delivery, enabling their organizations to adapt to changing priorities with agility and data-backed insights.


Mozilla patches critical “BigSig” cryptographic bug: Here’s how to track it down and fix it

Many software vendors rely on third-party open source cryptographic tools, such as OpenSSL, or simply hook up with the cryptographic libraries built into the operating system itself, such as Microsoft’s Secure Channel on Windows or Apple’s Secure Transport on macOS and iOS. But Mozilla has always used its own cryptographic library, known as NSS, short for Network Security Services, instead of relying on third-party or system-level code. Ironically, this bug is exposed when affected applications set out to test the cryptographic veracity of digital signatures provided by the senders of content such as emails, PDF documents or web pages. In other words, the very act of protecting you, by checking up front whether a user or website you’re dealing with is an imposter …could, in theory, lead to you getting hacked by said user or website. As Ormandy shows in his bug report, it’s trivial to crash an application outright by exploiting this bug, and not significantly more difficult to perform what you might call a “controlled crash”, which can typically be wrangled into an RCE, short for remote code execution.


Zero Trust Shouldn’t Mean Zero Trust in Employees

An effective zero trust experience works for and empowers the employee. To them, everything feels the same — whether they're accessing their email, a billing platform, or the HR app. In the background, they don't have broad access to apps and data that they don't need. This comes down to building a well-defined and measurable "circle of trust" that is granted to an employee based on their role and team. With these guardrails in place, you're removing the friction and providing a good user experience while establishing more effective security. Security teams must be able to clearly and reliably enforce a trust boundary that's extended to employees based on what they need to get their jobs done. From there, zero trust is about building out those guardrails so that the trust boundary is maintained. No more, no less. Zero trust should be implemented across the entire HR life cycle, especially when staffing shortages and the Great Resignation have caused hiring and turnover fluctuations.


Understanding Black Box Testing - Types, Techniques, and Examples

To ensure that software quality is maintained and you do not lose customers because of a bad user experience, your application should go through rigorous scrutiny using suitable testing techniques. Black box testing is the easiest and fastest way to investigate the software's functionality without any coding knowledge. The debate on white box vs. black box testing is an ever-prevailing discussion, where both stand out as winners. Whether you want white box testing or black box testing depends upon how deep you want to get into the software structure under test. If you want to test the functionalities from an end-user perspective, black box testing fits the bill. And if you wish to direct your testing efforts towards how the software is built, its coding structure, and design, then white box testing works well. However, both aim to improve the software quality in their own different ways. There are a lot of black box testing techniques discussed above. 
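
A minimal sketch of the distinction in TypeScript: the applyDiscount function below is hypothetical, and a black-box test exercises only its inputs and observable outputs, never its internals.

```typescript
import { strictEqual } from "node:assert";

// Hypothetical function under test. A black-box tester sees only its
// signature and documented behavior, never this implementation.
function applyDiscount(price: number, code: string): number {
  return code === "SAVE50" ? price * 0.5 : price;
}

// Black-box tests: drive the function purely through its public contract.
strictEqual(applyDiscount(100, "SAVE50"), 50);  // valid code halves the price
strictEqual(applyDiscount(100, "BOGUS"), 100);  // unknown code changes nothing
console.log("all black-box checks passed");
```

A white-box test of the same function would instead be written with the branch structure in view, for example ensuring that each path of the conditional is covered.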


CIO priorities: 10 challenges to tackle in 2022

From robotic process automation to low-code technologies, there's a whole suite of tools that claim to make the application development process easier. However, automation should come with a warning: while these tools can lighten the day-to-day load for IT teams, someone somewhere must ensure that new applications meet stringent reliability and security standards. Increased automation will mean IT professionals spend more time engaging and overseeing, so focus on training and development to ensure your staff is ready for a shift in responsibility. With all the talk of automation and low-code development, it would be easy to assume that the traditional work of the IT department is done. Nothing could be further from the truth. Yes, the tech team is set to change, but talented developers – who work alongside their business peers – remain a valuable and highly prized commodity. To attract and retain IT staff, CIOs will need to think very hard about the opportunities they offer. Rather than being a place to go, work is going to become an activity you do in a collaborative manner, regardless of location. 


Cloud numbers don’t add up

The problem is aligning ambition with reality. It’s perhaps also a weirdness in the definition of “cloud native.” The Cloud Native Computing Foundation defines “cloud native” as enabling enterprises to “build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.” There’s nothing particularly modern about a private cloud/data center. Scott Carey has described it thus: “Cloud native encompasses the various tools and techniques used by software developers today to build applications for the public cloud, as opposed to traditional architectures suited to an on-premises data center” (emphasis mine). If going cloud native simply means “doing what we’ve always done, but sprinkled with containers,” that’s not a very useful data point. “Cloud first,” however, arguably is. If we’re already at 47% of respondents saying they default to cloud (again, my assumption is that people weren’t thinking “my private data center” when answering a question about “cloud first”), then we have a real problem with measured spend on cloud computing from IDC, Gartner, and even the most wide-eyed of would-be analyst firms.


The Dark Web: a cyber crime bazaar where data is a hot commodity

Everyone is aware of the Dark Web’s reputation as a playground for cyber criminals who anonymously trade stolen data and partake in illegal activities. While in the past it required a degree of technical knowledge to transact on the Dark Web, in recent years the trading of malware and stolen data has become increasingly commoditised. As a result, marketplaces, hacker forums and ransomware group sites are proliferating. Bitglass recently conducted research that shines a light on exactly how Dark Web activity, the value of stolen data, and cyber criminal behaviours have rapidly evolved in recent years. What we found should trigger alarm bells for enterprises that want to prevent their sensitive data from ending up on the Dark Web. Back in 2015, Bitglass conducted the world’s first data tracking experiment to identify exactly how data is viewed and accessed on the Dark Web. This year we re-ran the experiment and embellished it, posting fake account usernames, emails and passwords that would supposedly give access to high-profile social media, retail, gaming, crypto and pirated content networks acquired through well-known breaches.


Disaster preparedness: 3 key tactics for IT leaders

Once risks are identified and impacts are evaluated and scored, implement an appropriate risk response. This includes risk treatment options to accept the risk, mitigate the risk with new or existing controls, transfer the risk to third parties (often with insurance or risk sharing), or avoid the risk by ceasing the business activity related to it. A risk assessment can be coupled with a business impact analysis (BIA) that provides input into business continuity and disaster planning. A BIA identifies recovery time objectives (RTOs), recovery point objectives (RPOs), critical processes, dependence on critical systems, and many other areas. It gets to the 80/20 rule: rather than create costly recovery strategies for 100 percent of all critical business functions, you want to focus on the 20 percent of the business processes that are the most critical and need to be recovered quickly in a disaster event. Once a BIA is completed, organizations can determine their recovery strategies to maintain continuity of operations during a disaster. Business continuity plans should be based on the BIA and updated at least every year.



Quote for the day:

"Tact is the ability to make a person see lightning without letting him feel the bolt." -- Orlando A. Battista

Daily Tech Digest - December 21, 2021

Everyone likes to talk sustainability, but who takes responsibility?

It’s no longer enough to rely upon a small pool of employees to drive, inform and implement widespread change. Meeting these ambitious targets will only be possible if they are accompanied by a top-down mentality to change alongside a groundswell of employee support. Ultimately, responsibility for change needs to fall under the remit of the entire workforce, not just one individual. At the heart of this is ensuring that the sustainability function is not siloed from the rest of the business, acting as its own separate entity with different KPIs or activations. To be successful, it needs to permeate the wider business and encourage others to embrace a ‘sustainability by design’ mindset with new policy, direction and solutions. At first, this might mean that meetings should have a dedicated sustainability champion, whether that is the CEO or Chief Sustainability Officer, as outlined in the above research, or another individual, who knowledge-shares, coaches other employees and ensures the business is on track against its targets. 


Log4j: Belgian Defense Ministry Reports it Was 'Paralyzed'

The ministry told the Belgian newspaper that the cyberattack stemmed from Apache's Log4j - which provides logging capabilities for Java applications and is widely used, including for Apache web server software. Belgian Commander Olivier Séverin also told the outlet, "All weekend our teams have been mobilized to control the problem, continue our activities and warn our partners." Taking to Facebook in the wake of this recent attack, the Ministry of Defense writes, "Due to technical issues, we are unable to process your requests via mil.be or answer your queries via Facebook. We are working on a resolution and we thank you for your understanding." Representatives for both the ministry and Defense Minister Ludivine Dedonder did not respond to Information Security Media Group's request for comment. Belgian officials also did not elaborate on the attack's specifics with De Standaard. The Belgian incident is one of the first high-profile attacks stemming from the Log4j vulnerability, although cybersecurity experts have warned of active scanning and exploitation of the remote code execution vulnerability.


Use of blockchain technology could increase human trust in AI

With advancements in technology, trust has become a vital factor in human-technology interactions. In the past, people trusted technology mainly because it worked as expected. With the emergence of Artificial Intelligence solutions, however, this is no longer the case, due to the following challenges. Openness: AI-based applications are built to be adaptive and reactive, to have an intelligence of their own to respond to situations. Anyone can put them to good use or apply them for nefarious purposes. Hence, people have some reservations about trusting AI-based solutions. Transparency: One of the significant issues impacting human trust in AI applications is the lack of transparency. AI developers need to clarify the extent of personal data utilized and the benefits and risks of using the application to increase trust. Privacy: AI has made data collection and analysis much easier; however, the end-users have to bear the brunt, as the collection of humongous amounts of data by companies worldwide may end up jeopardizing the privacy of the user(s) whose data is being collected.


Shifting security further left: DevSecOps becoming SecDevOps

With the rising cost and complexity of modern software development practices, businesses will increasingly require a comprehensive, fully integrated security platform with fewer disparate tools. This platform supports pervasive, or continuous, security because it:

- Starts in the design phase with threat modeling, ensuring that only secure components are incorporated into the design. This shifts security even further left, so that DevSecOps now becomes SecDevOps, ensuring software is ‘secure by design’.
- Is fully integrated, but also open to new technology plugins, to provide comprehensive coverage analyzing every possible dimension of the code. This ‘single pane of glass’ approach empowers security professionals and developers to understand risk, prioritize remediation efforts, and define and monitor progress objectives across multiple dimensions.
- Delivers a frictionless developer experience that enables security analysis to meet developers where they work – within the IDE, CI/CD pipelines, code and container repositories, and defect tracking systems.


DeepMind’s New AI With a Memory Outperforms Algorithms 25 Times Its Size

Bigger is better—or at least that’s been the attitude of those designing AI language models in recent years. But now DeepMind is questioning this rationale, and says giving an AI a memory can help it compete with models 25 times its size. When OpenAI released its GPT-3 model last June, it rewrote the rulebook for language AIs. The lab’s researchers showed that simply scaling up the size of a neural network and the data it was trained on could significantly boost performance on a wide variety of language tasks. Since then, a host of other tech companies have jumped on the bandwagon, developing their own large language models and achieving similar boosts in performance. But despite the successes, concerns have been raised about the approach, most notably by former Google researcher Timnit Gebru. In the paper that led to her being forced out of the company, Gebru and colleagues highlighted that the sheer size of these models and their datasets makes them even more inscrutable than your average neural network, which are already known for being black boxes.


5 rules for getting data architecture right

A number of cloud experts suggest that centralizing your application data is the right model for managing a large dataset for a large application. Centralizing your data, they argue, makes it easier to apply machine learning and other advanced analytics to get more useful information out of your data. But this strategy is faulty. Centralized data is data that can’t scale easily. The most effective way to scale your data is to decentralize it and store it within the individual service that owns the data. Your application, if composed of dozens or hundreds of distributed services, will store your data in dozens or hundreds of distributed locations. This model enables easier scaling and supports a full service ownership model. Service ownership enables development teams to work more independently, and encourages more robust SLAs between services. This fosters higher-quality services and makes data changes safer and more efficient through localization.
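
A minimal TypeScript sketch of that ownership model (all names hypothetical): the service encapsulates its own store, and other services reach the data only through the owning service's API, never through a shared central database.

```typescript
// Each service owns its data outright; nothing else touches the store.
class InvoiceService {
  // Private, service-local storage that can scale and evolve with this
  // service alone (in production this would be the service's own database).
  private invoices = new Map<string, { customerId: string; total: number }>();

  create(id: string, customerId: string, total: number): void {
    this.invoices.set(id, { customerId, total });
  }

  // Other services obtain data only through calls like this one.
  totalFor(customerId: string): number {
    let sum = 0;
    for (const invoice of this.invoices.values()) {
      if (invoice.customerId === customerId) sum += invoice.total;
    }
    return sum;
  }
}

const billing = new InvoiceService();
billing.create("inv-1", "cust-42", 120);
billing.create("inv-2", "cust-42", 80);
console.log(billing.totalFor("cust-42")); // => 200
```

Because every service fronts its own store, each one can be scaled, migrated or re-modeled independently, which is the ownership property the passage argues for.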


CISA Compliance for 2022

The fact that the Federal Government is suddenly placing such a high priority on cyber security is telling, and the directive is worth paying attention to, even for private sector organizations. If federal agencies shore up their cyber defenses in accordance with the new directive, then at least some cybercriminals will likely turn their attention toward attacking private sector targets. After all, it is likely that some of the known vulnerabilities will continue to exist in private companies, even after those vulnerabilities have been addressed on systems belonging to the federal government. With the end of the year rapidly approaching, IT professionals should put cyber security at the top of their New Year's resolutions. But what specifically should IT pros be doing to prepare for 2022? CISA differentiates between known vulnerabilities and vulnerabilities that are known to have been exploited. Likewise, IT pros in the private sector should focus their efforts and their security resources on addressing vulnerabilities that have been exploited in the real world. 


Major Algorithmic Breakthroughs Of 2021

In a major breakthrough, scientists have discovered an entirely different form of biological reproduction and applied it to create the first-ever, self-replicating living robots. This research was conducted by scientists at the University of Vermont, Wyss Institute for Biologically Inspired Engineering at Harvard University, and Tufts University. This team had created “Xenobots” last year and discovered that these computer-designed and hand-assembled organisms can swim out into their tiny dish, look for single cells, gather them together and assemble “baby” Xenobots in their mouth. After a few days, these become new Xenobots that look and move just like themselves. ... 2021 has been a transformative year for large language models, with all the major names in tech bringing in path-breaking new systems. Just days back, DeepMind introduced a 280 billion parameter transformer language model called Gopher. DeepMind’s research went on to say that Gopher almost halves the accuracy gap from GPT-3 to human expert performance and exceeds forecaster expectations.


Hybrid work model: 4 tips for teams in 2022

Use milestones and deadlines to gauge your team’s progress instead of tracking time. One challenge of remote work is “appearing” to be productive and present to the management team. However, measurement should not be seen as a punitive exercise to catch people out – it should guide employees toward completing their goals. Most workers don’t work the entire eight hours they’re in the office either, as they’re often engaging in spontaneous meetings and meaningful moments of connection with colleagues. Managers should disregard time as a measure of productivity and trust their employees to do their job to the best of their ability. If goals are being met but the employees feel distant because they don’t need to collaborate as much or that they need to “appear busy,” then the goals are too easy and need to be readjusted. Be careful to keep engagement and communication high – otherwise, you can end up with the “watermelon effect” – good “green” performance, but below the surface, there’s a big chunk of red, which represents a poor employee experience. 


The CEO’s Playbook for a Successful Digital Transformation

A crucial characteristic of successful digital CEOs is that they can step back far enough from their current business to reimagine where transformative — not incremental — value is possible. We find that these CEOs spend a lot of time visiting companies and staying abreast of how new trends are generating value. That helps them to look at their own assets with fresh eyes and see where there’s new value. Steve Timm, the president of Collins Aerospace, finds transformative value in being able to thoughtfully reimagine the business model. “Many CEOs have domain experience and they don’t want to get outside of that,” he told us during an interview. “They’re not thinking about redefining the broader architecture or ecosystem. We need to redefine the boundaries where value can come from.” With clarity on the business model established, targeting a domain — for example, a complete core process or user journey — has emerged as a critical element for focusing energies in a digital transformation. 



Quote for the day:

"The problem with being a leader is that you're never sure if you're being followed or chased." -- Claire A. Murray

Daily Tech Digest - December 20, 2021

Top 5 Internet Technologies of 2021

Speaking of React, 2021 didn’t see any diminishment of the popular Facebook-derived JavaScript library. Although React-based frameworks abound, one in particular stood out this year: Next.js, the open source framework managed by Vercel. At the end of October, Vercel announced version 12 of Next.js, which included ES modules and URL imports, instant Hot Module Replacement (HMR), and something called “Middleware” that enables you to “run code before a request is completed.” Next.js is indicative of the rise of SSGs (Static Site Generators) over the past few years, with Gatsby and Hugo being other examples. That said, there has been a noticeable move away from pure static generation — Next.js now describes itself as a “hybrid static [and] server rendering” framework. Next.js developers love its ease of use and all the fancy features (like “edge functions”), but not everyone is enamored with the output of Next.js-made apps. That’s perhaps more of an indictment of React itself than of Next.js. But it is worth noting that there is increasing pushback against React frameworks on the web, due to the amount of JavaScript they tend to use.
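
For context, Middleware in Next.js 12 is a function exported from a pages/_middleware.ts file that runs before the request completes. Here is a minimal sketch; the cookie name and redirect rule are invented for illustration, so treat the details as an assumption about a typical setup rather than canonical usage.

```typescript
// pages/_middleware.ts -- runs before a request is completed (Next.js 12)
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export function middleware(req: NextRequest) {
  // Hypothetical rule: visitors without a session cookie are redirected
  // to the login page before any /account page is rendered.
  if (req.nextUrl.pathname.startsWith("/account") && !req.cookies["session"]) {
    return NextResponse.redirect(new URL("/login", req.url));
  }
  return NextResponse.next(); // otherwise continue to the requested page
}
```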


Why it’s time to rethink your cyber talent and retention strategy

Organisations that don’t invest in cyber skills training and development programmes for technical personnel and the wider workforce risk throttling their future internal talent marketplace. Today’s increasingly digital workplace means cyber security is everyone’s business. By extending cyber awareness and training to all employees, organisations will be able to mobilise those individuals that demonstrate aptitude and interest to build up their skill sets and acquire industry-recognised certifications that will help the organisation expand and strengthen its cyber security teams. Alongside initiating a mentorship programme to support people making a ‘job shift’ into cyber security roles, organisations should look to facilitate defined cyber security career pathways. ... Many IT leaders are already active members of knowledge networks and communities, which present a rich seam of opportunity when it comes to virtually meeting and evaluating potential candidates who are an exact match for their business, in a highly targeted way.


On the Importance of Bayesian Thinking in Everyday Life

Surprisingly, there is no consensus as to what probability really means. In general, there are two ways to think about it. One is to define probability as the observed frequency of events in many trials. For instance, if one were to toss a coin many times, approximately half of the outcomes would be heads, and the other half would be tails. The more tosses, the closer the observed frequencies will be to 50–50. Hence, we say that the probability of tossing heads (or tails) is 50%, or 0.5. This is the so-called frequentist probability. There is also another way to think about it, known as subjective or Bayesian probability. In a nutshell, this definition states that a person’s subjective belief about how likely something is to happen is also a probability. I might say: I think there is a 50% chance it will rain tomorrow. It is a valid statement of a Bayesian probability, but not of a frequentist one. ... Whichever definition of probability we adopt (and we will see both in action shortly), probability always follows certain rules. It is a number between 0 and 1 that expresses how certain something is to happen. 
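
For reference, the rules alluded to are the standard probability axioms, and the machinery that makes subjective probability usable is Bayes' theorem, which prescribes how a prior belief should be updated by evidence. These are textbook forms, not formulas from the article itself:

```latex
0 \le P(A) \le 1, \qquad P(\Omega) = 1, \qquad
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
```

Here P(H) is the prior belief in hypothesis H, P(E | H) is the likelihood of observing evidence E if H is true, and P(H | E) is the updated (posterior) belief, e.g., revising that 50% belief in tomorrow's rain after seeing the evening sky.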


The Future of Work is Not Corporate - It’s DAOs and Crypto Networks

As companies grow, they are no longer able to maintain a sustainable relationship with these orbital network participants. The relationship between the company and the participants turns zero-sum, and in order to maximize profits, the company begins to extract value from these participants. ... The model of a company having strict boundaries between internal and external may have made sense in the Industrial Age, but in the Information Age, this model leads to misaligned incentives and unsustainable extraction. In our world of complex information and orbital stakeholders, companies are no longer suited to help us coordinate our activity. Crypto networks create better alignment between participants, and DAOs will be the coordination layer for this new world. ... DAOs will eventually replace the traditional model. A DAO is an internet-native organization with core functions that are automated by smart contracts, and with people who do the things that automation cannot. In practice, not all DAOs are decentralized or autonomous, so it is best to think of DAOs as internet-based organizations that are collectively owned and controlled by its members.


The future is not the Internet of Things… it is the Connected Intelligent Edge

It is not surprising that Qualcomm is talking about it. At its recent Investor Day presentation, Amon shared how the company is uniquely positioned to drive the Connected Intelligent Edge: “We are working to enable a world where everyone and everything is intelligently connected. Our mobile heritage and DNA puts us in an incredible position to provide high-performance, low-power computing, on-device intelligence, all wireless technologies, and leadership across not only AI processing and connectivity but camera, graphics, and sensors. These technologies will scale to support every single device at the edge, from earbuds all the way to connected intelligent vehicles.” For Qualcomm, Amon sees this as an opportunity to engage a $700 billion addressable market in the next decade. Amon is not alone. ... “Qualcomm is a leader at the Intelligent Edge, driving advances in efficient computing, wireless connectivity and on-device AI. And your vision for a future of technology where everyone and everything is intelligently connected is aligned with our own,” Nadella said.


Measure Outcomes, Not Outputs: Software Development in Today’s Remote Work World

Lower productivity does not always mean that the developer lacks skills and is therefore inefficient. Comparing how much code was written to how much was moved into production provides some key insights. The first insight is whether or not the developer was working on features that are important to the business. Suppose the development team wrote a lot of code, but only a small amount made it to production. In such a scenario, it could mean they weren’t working on the right features because someone misunderstood the business priorities or spent a lot of time on prototyping. Secondly, it is possible that the product owner did not fully define the requirement and kept on changing it, resulting in code churn. Code churn measures the amount of code that was re-written for a feature to be done right. Code churn can happen because of a) inexperienced developers writing bad code, b) the developer’s poor understanding of the product requirements, c) the product owner not defining the feature well, leading to scope changes, or d) the prioritization of features not being done right by the product owner.
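
As a rough illustration of the metric (one common way to define it; both the definition details and the input shape here are assumptions), churn can be computed as the share of written lines that were later reworked:

```typescript
// One entry per commit that touched a feature: lines added and deleted.
interface Change {
  added: number;
  deleted: number;
}

// Churn ratio: lines deleted after the initial commit (i.e., rework of
// earlier code) relative to all lines written. High churn suggests the
// feature kept being rewritten rather than moving steadily forward.
function churnRatio(history: Change[]): number {
  const totalAdded = history.reduce((sum, c) => sum + c.added, 0);
  const reworked = history.slice(1).reduce((sum, c) => sum + c.deleted, 0);
  return totalAdded === 0 ? 0 : reworked / totalAdded;
}

// Example: a feature written once (100 lines), then rewritten twice.
const ratio = churnRatio([
  { added: 100, deleted: 0 },
  { added: 40, deleted: 60 },
  { added: 25, deleted: 30 },
]);
console.log(ratio.toFixed(2)); // => "0.55" (90 reworked / 165 written)
```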


Lights Out: Cyberattacks Shut Down Building Automation Systems

The firm, located in Germany, discovered that three-quarters of the BAS devices in the office building system network had been mysteriously purged of their "smarts" and locked down with the system's own digital security key, which was now under the attackers' control. The firm had to revert to manually flipping on and off the central circuit breakers in order to power on the lights in the building. The BAS devices, which control and operate lighting and other functions in the office building, were basically bricked by the attackers. "Everything was removed ... completely wiped, with no additional functionality" for the BAS operations in the building, explains Thomas Brandstetter, co-founder and general manager of Limes Security, whose industrial control system security firm was contacted in October by the engineering firm in the wake of the attack. Brandstetter's team, led by security experts Peter Panholzer and Felix Eberstaller, ultimately retrieved the hijacked BCU (bus coupling unit) key from memory in one of the victim's bricked devices, but it took some creative hacking.


What Log4Shell teaches us about open source security

Nearly every organization now uses some amount of open source, thanks to benefits such as lower cost compared with proprietary software and flexibility in a world increasingly dominated by cloud computing. Open source isn’t going away anytime soon — just the opposite — and hackers know this. As for what Log4Shell says about open-source security, I think it raises more questions than it answers. I generally agree that open-source software has security advantages because of the many watchful eyes behind it — all those contributors worldwide who are committed to a program’s quality and security. But a few questions are fair to ask: Who is minding the gates when it comes to securing foundational programs like Log4j? The Apache Foundation says it has more than 8,000 committers collaborating on 350 projects and initiatives, but how many are engaged to keep an eye on an older, perhaps “boring” one such as Log4j? Should large deep-pocketed companies besides Google, which always seems to be heavily involved in such matters, be doing more to support the cause with people and resources?


AI Comes Alive in Industrial Automation

AI and ML tools are being used to predict future energy consumption patterns in manufacturing. This mitigates soaring energy costs and also helps offset climate change. AI also helps to sort out chaotic systems such as renewables. “Training these AI models is burning tons of energy. That’s not false. It does take energy,” said Nicholson. “But what people are missing is that AI models are designed to help companies with enormous physical systems operate more efficiently.” While AI takes up a lot of processing energy, the results in efficiency savings can far outweigh the expense in energy consumption. “AI can help us make more with less. We can cut down on waste with optimization. We can get growth without consuming more,” said Nicholson. “We can train an optimization model in 20 minutes to save a company tens of millions of dollars of energy consumption per year. The advantages can be huge. That’s already happening.” AI can help plant managers figure out what equipment is best for what task at what time. These are issues that are not easily solved without computer analysis.


Backdoor Discovered in US Federal Agency Network

Avast's suspicion of network interception and exfiltration is based on its analysis of two files the researchers obtained. The company did not provide ISMG with the origin of the files. One of the files, through which the threat actor initiates the backdoor, is termed a "downloader" by Avast. It masquerades as a legitimate Windows file named oci[.]dll and abuses WinDivert, a legitimate packet-capturing utility that can be used to implement user-made packet filters, packet sniffers, firewalls, NAT, VPNs, tunneling applications, etc., without the need to write kernel-mode code. This allows the attacker to listen to all internet communication via the victim's network, they say. "We found this first file disguised as oci.dll ('C:\Windows\System32\oci.dll') - or Oracle Call Interface. It contains a compressed library [called NTlib]. This oci.dll exports only one function, 'DllRegisterService.' This function checks the MD5 of the hostname and stops if it doesn’t match the one it stores."



Quote for the day:

"Without courage, it doesn't matter how good the leader's intentions are." -- Orrin Woodward

Daily Tech Digest - December 19, 2021

Data Science Collides with Traditional Math in the Golden State

San Francisco’s approach is the model for a new math framework proposed by the California Department of Education for adoption in K-12 education statewide. Like the San Francisco model, the state framework seeks to alter the traditional pathway that has guided college-bound students for generations, including by encouraging middle schools to drop Algebra (the decision to implement the recommendations rests with individual school districts). The new framework has been met with some controversy. Yesterday, a group of university professors published an open letter on K-12 mathematics that specifically cites the new California Mathematics Framework. “We fully agree that mathematics education ‘should not be a gatekeeper but a launchpad,’” the professors write. “However, we are deeply concerned about the unintended consequences of recent well-intentioned approaches to reform mathematics, particularly the California Mathematics Framework.” Frameworks like the CMF aim to “reduce achievement gaps by limiting the availability of advanced mathematical courses to middle schoolers and beginning high schoolers,” the professors continued.


Promoting trust in data through multistakeholder data governance

A lack of transparency and openness in the proceedings, or barriers to participation such as prohibitive membership fees, will impede participation and reduce trust in the process. These challenges are particularly felt by participants from low- and middle-income countries (LICs and LMICs), whose financial resources and technical capacity are usually not on par with those of higher-income countries. They affect both the participatory nature of the process itself and the inclusiveness and quality of the outcome. Even where a level playing field exists, the effectiveness of the process can be limited if decision makers do not incorporate input from other stakeholders. Notwithstanding these challenges, multistakeholder data governance is an essential component of the “trust framework” that strengthens the social contract for data. In practice, this will require supporting the development of diverse forums—formal or informal, digital or analog—to foster engagement on key data governance policies, rules, and standards. It will also require governments and nongovernmental actors to allocate funds and technical assistance to support the effective participation of LMICs and underrepresented groups.


A Plan for Developing a Working Data Strategy Scorecard

Strategy is an evolving process, with regular adjustments expected as progress is measured against desired goals over longer timeframes. “There’s always an element of uncertainty about the future,” Levy said, “so strategy is more about a set of options or strategic choices, rather than a fixed plan.” It’s common for companies to re-evaluate and adjust accordingly as business goals evolve and systems or tools change. Before building a strategy, people often assume that they must have vision statements or mission statements, a SWOT analysis, or goals and objectives. These are good to have, he said, but in most instances, they are only available after the strategy analysis is completed. “When people establish their Data Strategies, it’s typically to address limitations they have and the goals that they want. Your strategy, once established, should be able to answer these questions.” But again, Levy said, it’s after the strategy is developed, not prior. Although it can be difficult to understand the purpose of a Data Strategy, he said, it’s critically important to clearly identify goals and know how to communicate them to the intended audience.


“Less popular” JavaScript Design Patterns

As software engineers, we strive to write maintainable, reusable, and eloquent code that might live forever in large applications. The code we create must solve real problems; we are certainly not trying to produce redundant, unnecessary, or “just for fun” code. At the same time, we frequently face problems that already have well-known solutions, defined and discussed countless times by the global community or even by our own teams. Such recurring solutions are called “design patterns.” There are many design patterns in software, some used often, others less frequently. Examples of popular JavaScript design patterns include the factory, singleton, strategy, decorator, and observer patterns. In this article, we’re not going to cover all of the design patterns in JavaScript. Instead, let’s consider some of the less well-known but potentially useful JS patterns, such as command, builder, and special case, along with real examples from our production experience.
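
As a taste of the "less popular" patterns mentioned, here is a minimal command-pattern sketch in TypeScript; the class and method names are illustrative, not taken from the article. The pattern wraps an action and its arguments in an object so it can be queued, logged, or undone by an invoker that knows nothing about the action itself.

```typescript
interface Command {
  execute(): void;
  undo(): void;
}

class Light {
  on()  { console.log("light on"); }
  off() { console.log("light off"); }
}

// A concrete command binds a receiver (the light) to an action.
class ToggleLightCommand implements Command {
  constructor(private light: Light) {}
  execute() { this.light.on(); }
  undo()    { this.light.off(); }
}

// The invoker works only with the Command interface, so new
// commands can be added without changing it.
class Invoker {
  private history: Command[] = [];
  run(cmd: Command) { cmd.execute(); this.history.push(cmd); }
  undoLast() { this.history.pop()?.undo(); }
}

const invoker = new Invoker();
invoker.run(new ToggleLightCommand(new Light())); // "light on"
invoker.undoLast();                               // "light off"
```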


Software Engineering | Coupling and Cohesion

The purpose of the design phase in the software development life cycle is to produce a solution to the problem given in the SRS (Software Requirements Specification) document. The output of the design phase is the Software Design Document (SDD). Design is essentially a two-part iterative process. The first part is conceptual design, which tells the customer what the system will do. The second is technical design, which allows the system builders to understand the actual hardware and software needed to solve the customer’s problem. ... If the dependency between modules is based on the fact that they communicate by passing only data, the modules are said to be data coupled. In data coupling, the components are independent of each other and communicate through data alone; module communication carries no tramp data. A customer billing system is a typical example. In stamp coupling, a complete data structure is passed from one module to another, so it does involve tramp data. It may be necessary for efficiency reasons, provided the choice is made deliberately by an insightful designer rather than by a lazy programmer. The sketch below contrasts the two.
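
A minimal sketch with hypothetical names: the data-coupled function receives only the field it needs, while the stamp-coupled version receives the whole structure, dragging along tramp data the callee never uses and coupling it to every future change in that structure.

```typescript
interface Customer {
  id: string;
  name: string;
  address: string;
  outstandingBalance: number;
}

// Data coupling: the billing module receives exactly what it needs.
function computeLateFee(outstandingBalance: number, daysLate: number): number {
  return daysLate > 30 ? outstandingBalance * 0.05 : 0;
}

// Stamp coupling: the whole Customer structure is passed in, even
// though only one field is used; changes to Customer now ripple here.
function computeLateFeeStamped(customer: Customer, daysLate: number): number {
  return daysLate > 30 ? customer.outstandingBalance * 0.05 : 0;
}

const customer: Customer = {
  id: "c-42",
  name: "Ada",
  address: "1 Main St",
  outstandingBalance: 200,
};
console.log(computeLateFee(customer.outstandingBalance, 45)); // 10
console.log(computeLateFeeStamped(customer, 45));             // 10
```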


5 Takeaways from SmartBear’s State of Software Quality Report

As API adoption and growth continue, standardization (52%) remains the top challenge organizations hope to solve as they look to scale. Without standardization, APIs become bespoke and developer productivity declines. Costs and time-to-market increase to accommodate changes, the general quality of the consumer experience wanes, and the result is a lower value proposition and decreased reach. Additionally, the consumer persona in the API landscape is rightfully getting more attention. Consumer expectations have never been higher: API consumers demand standardized offerings from providers and will look elsewhere if expectations around developer experience aren’t met, which is especially true in financial services. Security (40%) has thankfully crept up the rankings to number two this year. APIs increasingly connect our most sensitive data, so ensuring your APIs are secure before, during, and after production is imperative. Thoughtful standardization and governance guardrails are required for teams to deliver high-quality, secure APIs consistently.


From DeFi year to decade: Is mass adoption here? Experts Answer, Part 1

More scaling solutions will become essential to the mass adoption of DeFi products and services. We are seeing most DeFi applications go live on multiple chains. While that makes them cheaper to use, it adds complexity for those trying to learn and understand how they work. Thus, to start the second phase of DeFi mass adoption, we need solutions that simplify onboarding and the use of DApps spread across different chains and scaling solutions. The endgame is for all cross-chain actions to happen in the background, handled by infrastructure services such as Biconomy or by the DApps themselves, so that users never have to deal with them directly. ... Going into 2022, equipped with the right layer-one networks, we’re aiming for mass adoption. To achieve that, we need to eliminate the entry barriers to buying and selling crypto through regulated fiat bridges (such as banks), overhaul the user experience, reduce fees, and provide the right guide rails so everyone can participate easily and safely in the decentralized economy. DeFi is legitimizing crypto and decentralized economies. Traditional financial institutions are already starting to participate. In 2022, we will only see an uptick in usage and adoption.


Serious Security: OpenSSL fixes “error conflation” bugs – how mixing up mistakes can lead to trouble

The good news is that the OpenSSL 1.1.1m release notes don’t list any CVE-numbered bugs, suggesting that although this update is both desirable and important, you probably don’t need to consider it critical just yet. But those of you who have already moved forward to OpenSSL 3 – and, like your tax return, it’s ultimately inevitable, and somehow a lot easier if you start sooner – should note that OpenSSL 3.0.1 patches a security risk dubbed CVE-2021-4044. ... In theory, a precisely written application ought not to be dangerously vulnerable to this bug. It is caused by what we referred to in the headline as error conflation, which is really just a fancy way of saying, “We gave you the wrong result.” Simply put, some internal errors in OpenSSL – a genuine but unlikely failure such as running out of memory, or a flaw elsewhere in OpenSSL that provokes an error where there wasn’t one – don’t get reported correctly. Instead of percolating back to your application precisely, these errors get “remapped” as they pass back up the call chain in OpenSSL, ultimately showing up as a completely different sort of error.
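
This is not OpenSSL code, but a language-agnostic sketch of what error conflation looks like in any API: distinct low-level failures get remapped into one generic result on the way up the call chain, so the caller can no longer tell a transient fault from a genuine rejection. All names are illustrative.

```typescript
class OutOfMemoryError extends Error {}
class SignatureInvalidError extends Error {}

// The inner layer can fail in two very different ways.
function verifyInner(sig: string): void {
  if (sig.length === 0) throw new OutOfMemoryError("transient failure");
  if (sig !== "good") throw new SignatureInvalidError("bad signature");
}

// Buggy pattern: every failure is "remapped" up the call chain into
// the same generic result, conflating "retry later" with "reject".
function verifyConflated(sig: string): boolean {
  try {
    verifyInner(sig);
    return true;
  } catch {
    return false; // OOM and forgery now look identical to the caller
  }
}

console.log(verifyConflated("good")); // true
console.log(verifyConflated(""));     // false, though it should mean "retry"
```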


Digital Asset Management – what is it, and why does my organisation need it?

DAM technology is more than a repository, of course. Picture it as a framework that holds a company’s assets, on top of which sits a powerful AI engine capable of learning the connections between disparate data sets and presenting them to users in ways that make the data more useful and functional. Advanced DAM platforms can scale to storing more than ten billion objects at the same time, all of which become tangible assets connected by the built-in AI. This can produce a huge rise in efficiency around the use of assets and objects. Take, for example, a busy modern media marketing agency. In the digital world, it faces a massive expansion of content at the same time as release windows are shrinking, coupled with increasingly complex content creation and delivery ecosystems. A DAM platform can manage those huge volumes of assets, each with its complex metadata, at speeds and scale that would simply break a legacy system. Another compelling example of DAM in action is a large U.S.-based film and TV company, which uses it for licensing management.


Impact of Data Quality on Big Data Management

A starting point for measuring Data Quality can be the qualities of big data—volume, velocity, variety, veracity—supplemented with a fifth criterion, value; together these make up the baseline performance benchmarks. Interestingly, these baseline benchmarks actually contribute to the complexity of big data: a variety of formats (structured, unstructured, or semi-structured) increases the possibility of poor data, and data channels such as streaming devices with high-volume, high-velocity data increase the chances of corrupt data—and thus no single quality metric can work on such voluminous, multi-type data. The easy availability of data today is both a boon and a barrier to Enterprise Data Management: on one hand, big data promises advanced analytics with actionable outcomes; on the other, data integrity and security are seriously threatened. The Data Quality program is an important step in implementing a practical data governance framework, as this single factor controls the outcomes of business analytics and decision-making. ... Another primary challenge that big data brings to Data Quality Management is ensuring data accuracy, without which insights would be inaccurate.
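
As a minimal illustration of the point that no single metric suffices, the sketch below (hypothetical fields, domain rule, and data) scores a small dataset on three separate dimensions, completeness, validity, and uniqueness, and reports them side by side.

```typescript
interface Reading {
  sensorId: string | null;
  value: number | null;
  timestamp: number | null; // epoch milliseconds
}

function qualityReport(rows: Reading[]) {
  const total = rows.length;
  // Completeness: share of rows with every field populated.
  const complete = rows.filter(
    (r) => r.sensorId !== null && r.value !== null && r.timestamp !== null
  ).length;
  // Validity: share of rows passing a hypothetical domain rule.
  const valid = rows.filter(
    (r) => r.value !== null && r.value >= 0 && r.value <= 1000
  ).length;
  // Uniqueness: ratio of distinct sensor IDs to non-null IDs.
  const ids = rows.map((r) => r.sensorId).filter((id) => id !== null);
  const unique = new Set(ids).size;
  return {
    completeness: complete / total,
    validity: valid / total,
    uniqueness: unique / ids.length,
  };
}

console.log(
  qualityReport([
    { sensorId: "s1", value: 42, timestamp: 1_640_000_000_000 },
    { sensorId: "s1", value: -5, timestamp: 1_640_000_060_000 },
    { sensorId: null, value: 17, timestamp: null },
  ])
); // { completeness: 0.666..., validity: 0.666..., uniqueness: 0.5 }
```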



Quote for the day:

"There is no "one" way to be a perfect leader, but there are a million ways to be a good one." -- Mark W. Boyer