
Daily Tech Digest - July 17, 2024

Optimization Techniques For Edge AI

Edge devices often have limited computational power, memory, and storage compared to centralised servers. Because of this, cloud-centric ML models need to be retargeted so that they fit in the available resource budget. Further, many edge devices run on batteries, making energy efficiency a critical consideration. The hardware diversity of edge devices, which range from microcontrollers to powerful edge servers with different capabilities and architectures, requires different model refinement and retargeting strategies. ... Many use cases involve the distributed deployment of numerous IoT or edge devices, such as CCTV cameras, working collaboratively towards specific objectives. These applications often have built-in redundancy, making them tolerant to failures, malfunctions, or less accurate inference results from a subset of edge devices. Algorithms can be employed to recover from missing, incorrect, or less accurate inputs by utilising the global information available. This approach allows for the combination of high and low accuracy models to optimise resource costs while maintaining the required global accuracy through the available redundancy.
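
Quantisation is one of the most common retargeting steps the excerpt alludes to. The sketch below, assuming PyTorch and an illustrative three-layer model, shrinks float32 weights to int8 to fit a tighter resource budget; it is a minimal illustration, not a full edge-deployment pipeline.

```python
# Minimal sketch: shrinking a cloud-trained model for an edge budget via
# post-training dynamic quantization (PyTorch). Model and sizes illustrative.
import os
import torch
import torch.nn as nn

# Stand-in for a cloud-centric model that exceeds the edge resource budget.
model = nn.Sequential(
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Quantize weights from float32 to int8; activations are quantized on the fly
# at inference time, trading a little accuracy for a roughly 4x smaller footprint.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_on_disk(m: nn.Module) -> int:
    """Serialize the model's weights and report the file size in bytes."""
    torch.save(m.state_dict(), "tmp.pt")
    size = os.path.getsize("tmp.pt")
    os.remove("tmp.pt")
    return size

print(f"fp32: {size_on_disk(model)} bytes, int8: {size_on_disk(quantized)} bytes")
```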


The Cyber Resilience Act: A New Era for Mobile App Developers

Collaboration is key for mobile app developers to prepare for the CRA. They should first conduct a thorough security audit of their apps, identifying and addressing any vulnerabilities. Then, they’ll want to implement a structured plan to integrate the needed security features, based on the CRA’s checklist. It may also make sense to invest in a partnership with cybersecurity experts who can provide deeper insights more efficiently and help streamline the process. Developers cannot be expected to become top-notch security experts overnight. Working with cybersecurity firms, legal advisors and compliance experts can clarify the CRA, simplify the path to compliance, and provide critical insights into best practices, regulatory jargon and tech solutions, ensuring that apps meet CRA standards while maintaining innovation. It’s also important to note that keeping comprehensive records of compliance efforts is essential under the CRA. Developers should establish a clear process for documenting security measures, vulnerabilities addressed, and any breaches or other incidents that were identified and remediated. 


Sometimes the cybersecurity tech industry is its own worst enemy

One of the fundamental infosec problems facing most organizations is that strong cybersecurity depends on an army of disconnected tools and technologies. That’s nothing new — we’ve been talking about this for years. But it’s still omnipresent. ... To a large enterprise, “platform” is a code word for vendor lock-in, something organizations tend to avoid. Okay, but let’s say an organization was platform curious. It could also take many months or years for a large organization to migrate from distributed tools to a central platform. Given this, platform vendors need to convince a lot of different people that the effort will be worth it — a tall task with skeptical cybersecurity professionals. ... Fear not, for the security technology industry has another arrow in its quiver — application programming interfaces (APIs). Disparate technologies can interoperate by connecting via their APIs, thus cybersecurity harmony reigns supreme, right? Wrong! In theory, API connectivity sounds good, but it is extremely limited in practice. For it to work well, vendors have to open their APIs to other vendors. 


How to Apply Microservice Architecture to Embedded Systems

In short, the process of deploying and upgrading microservices for an embedded system has a strong dependency on the physical state of the system’s hardware. But there’s another significant constraint as well: data exchange. Data exchange between embedded devices is best implemented using a binary data format. Space and bandwidth capacity are limited in an embedded processor, so text-based formats such as XML and JSON won’t work well. Rather, a binary format such as protocol buffers or a custom binary format is better suited for communication in a microservices-oriented architecture (MOA) scenario in which each microservice in the architecture is hosted on an embedded processor. ... Many traditional distributed applications can operate without each microservice in the application being immediately aware of the overall state of the application. However, knowing the system’s overall state is important for microservices running within an embedded system. ... The important thing to understand is that any embedded system will need a routing mechanism to coordinate traffic and data exchange among the various devices that make up the system.
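
To make the bandwidth argument concrete, here is a minimal sketch comparing a JSON-encoded sensor reading with a fixed binary layout packed via Python's struct module. The field names and layout are invented for illustration; in a real MOA deployment, protocol buffers would supply the agreed-upon schema.

```python
# Minimal sketch: why binary framing beats text on constrained links.
# A hypothetical sensor reading encoded as JSON vs. a fixed binary layout.
import json
import struct

reading = {"device_id": 42, "temp_c": 21.5, "humidity": 63.2, "seq": 100123}

as_json = json.dumps(reading).encode("utf-8")

# Fixed layout: u16 device id, f32 temperature, f32 humidity, u32 sequence.
# '<' means little-endian with no padding; both ends must agree on this
# layout, which is the role protocol buffers play in a real deployment.
as_binary = struct.pack("<HffI", reading["device_id"], reading["temp_c"],
                        reading["humidity"], reading["seq"])

print(len(as_json), "bytes as JSON vs", len(as_binary), "bytes as binary")

# The receiving microservice decodes by reversing the same layout.
device_id, temp_c, humidity, seq = struct.unpack("<HffI", as_binary)
```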


How to assess a general-purpose AI model’s reliability before it’s deployed

But these models, which serve as the backbone for powerful artificial intelligence tools like ChatGPT and DALL-E, can offer up incorrect or misleading information. In a safety-critical situation, such as a pedestrian approaching a self-driving car, these mistakes could have serious consequences. To help prevent such mistakes, researchers from MIT and the MIT-IBM Watson AI Lab developed a technique to estimate the reliability of foundation models before they are deployed to a specific task. They do this by considering a set of foundation models that are slightly different from one another. Then they use their algorithm to assess the consistency of the representations each model learns about the same test data point. If the representations are consistent, it means the model is reliable. When they compared their technique to state-of-the-art baseline methods, it was better at capturing the reliability of foundation models on a variety of downstream classification tasks. Someone could use this technique to decide if a model should be applied in a certain setting, without the need to test it on a real-world dataset. 
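
The excerpt doesn't spell out the algorithm, but the underlying intuition can be sketched: embed the same test point with several slightly different models and measure how consistently they place it relative to shared reference data. The toy below uses random data and illustrative parameters, and is not the MIT team's actual method; it scores consistency as neighbourhood overlap across model pairs.

```python
# Toy sketch of the general idea (not the authors' exact algorithm): a test
# point counts as reliable when slightly different models agree on its
# neighborhood among shared reference points.
import numpy as np

rng = np.random.default_rng(0)
n_models, n_refs, dim, k = 4, 100, 32, 5

# Hypothetical stand-ins: each "model" embeds the same reference data and
# test point, differing only by small perturbations of the representation.
base_refs = rng.normal(size=(n_refs, dim))
base_query = rng.normal(size=dim)
ref_embeds = [base_refs + 0.1 * rng.normal(size=(n_refs, dim))
              for _ in range(n_models)]
queries = [base_query + 0.1 * rng.normal(size=dim) for _ in range(n_models)]

def neighbors(refs: np.ndarray, q: np.ndarray) -> set:
    """Indices of the k reference points closest to the query embedding."""
    return set(np.argsort(np.linalg.norm(refs - q, axis=1))[:k])

neighbor_sets = [neighbors(r, q) for r, q in zip(ref_embeds, queries)]

# Consistency = average overlap of neighbor sets across model pairs;
# high overlap suggests the models "agree" about this input.
overlaps = [len(a & b) / k
            for i, a in enumerate(neighbor_sets)
            for b in neighbor_sets[i + 1:]]
print(f"consistency score: {np.mean(overlaps):.2f}")  # near 1.0 = reliable
```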


The Role of Technology in Modern Product Engineering

Product engineering has seen a significant transformation with the integration of advanced technologies. Tools like Computer-Aided Design (CAD), Computer-Aided Manufacturing (CAM), and Computer-Aided Engineering (CAE) have paved the way for more efficient and precise engineering processes. The early adoption of these technologies has enabled businesses to develop multi-million dollar operations, demonstrating the profound impact of technological advancements in the field. ... Deploying complex software solutions often involves customization and integration challenges. Addressing these challenges requires close client engagement, offering configurable options, and implementing phased customization. ... The future of product engineering is being shaped by technology integration, strategic geographic diversification, and the adoption of advanced methodologies like DevSecOps. As the tech landscape evolves with trends such as AI, Augmented Reality (AR), Virtual Reality (VR), IoT, and sustainable technology, continuous innovation and adaptation are essential.


A New Approach To Multicloud For The AI Era

The evolution from cost-focused to value-driven multicloud strategies marks a significant shift. Investing in multicloud is not just about cost efficiency; it's about creating an infrastructure that advances AI initiatives, spurs innovation and secures a competitive advantage. Unlike single-cloud or hybrid approaches, multicloud offers unparalleled adaptability and resource diversity, which are essential in the AI-driven business environment. Here are a few factors to consider. ... The challenge of multicloud is not simply to utilize a variety of cloud services but to do so in a way that each contributes its best features without compromising the overall efficiency and security of the AI infrastructure. To achieve this, businesses must first identify the unique strengths and offerings of each cloud provider. For instance, one platform might offer superior data analytics tools, another might excel in machine learning performance and a third might provide the most robust security features. The task is to integrate these disparate elements into a seamless whole. 


How Can Organisations Stay Secure In The Face Of Increasingly Powerful AI Attacks

One of the first steps any organisation should take when it comes to staying secure in the face of AI-generated attacks is to acknowledge a significant top-down disparity between the volume and strength of cyberattacks, and the ability of most organisations to handle them. Our latest report shows that just 58% of companies are addressing every security alert. Without the right defences in place, the growing power of AI as a cybersecurity threat could see that number slip even lower. ... Fortunately, there is a solution: low-code security automation. This technology gives security teams the power to automate tedious and manual tasks, allowing them to focus on establishing an advanced threat defence. ... There are other benefits too. These include the ability to scale implementations based on the team’s existing experience and with less reliance on coding skills. And unlike no-code tools that can be useful for smaller organisations that are severely resource-constrained, low-code platforms are more robust and customisable. This can result in easier adaptation to the needs of the business.


Time for reality check on AI in software testing

Given that AI-augmented testing tools are derived from data used to train AI models, IT leaders will also be more responsible for the security and privacy of that data. Compliance with regulations like GDPR is essential, and robust data governance practices should be implemented to mitigate the risk of data breaches or unauthorized access. Algorithmic bias introduced by skewed or unrepresentative training data must also be addressed to mitigate bias within AI-augmented testing as much as possible. But maybe we’re getting ahead of ourselves here. Because even as AI continues to evolve and autonomous testing becomes more commonplace, we will still need human assistance and validation. The interpretation of AI-generated results and the ability to make informed decisions based on those results will remain a responsibility of testers. AI will change software testing for the better. But don’t treat every tool using AI as a straight-up upgrade; each has different merits within the software development life cycle. 


Overlooked essentials: API security best practices

In my experience, there are six important indicators organizations should focus on to detect and respond to API security threats effectively – shadow APIs, APIs exposed to the internet, APIs handling sensitive data, unauthenticated APIs, APIs with authorization flaws, APIs with improper rate limiting. Let me expand on this further. Shadow APIs: Firstly, it’s important to identify and monitor shadow APIs. These are undocumented or unmanaged APIs that can pose significant security risks. Internet-exposed APIs: Limit and closely track the number of APIs accessible publicly. These are more prone to external threats. APIs handling sensitive data: APIs that process sensitive data and are also publicly accessible are among the most vulnerable. They should be prioritized for security measures. Unauthenticated APIs: An API lacking proper authentication is an open invitation to threats. Always have a catalog of unauthenticated APIs and ensure they are not vulnerable to data leaks. APIs with authorization flaws: Maintain an inventory of APIs with authorization vulnerabilities. These APIs are susceptible to unauthorized access and misuse. Implement a process to fix these vulnerabilities as a priority.
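
Of the six indicators, improper rate limiting is the easiest to illustrate in code. Below is a minimal token-bucket sketch; the rates and the handle_request helper are invented for illustration, and a production system would enforce this at an API gateway with buckets tracked in shared storage.

```python
# Minimal token-bucket sketch for the "improper rate limiting" indicator:
# each API key gets a refillable budget of requests. Parameters illustrative.
import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    rate: float = 5.0        # tokens added per second
    capacity: float = 10.0   # maximum burst size
    tokens: float = 10.0
    last: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller should respond 429 Too Many Requests

buckets: dict[str, TokenBucket] = {}

def handle_request(api_key: str) -> int:
    bucket = buckets.setdefault(api_key, TokenBucket())
    return 200 if bucket.allow() else 429

# A rapid burst of 15 calls: roughly the first 10 pass, the rest are throttled.
print([handle_request("client-1") for _ in range(15)])
```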



Quote for the day:

"The successful man doesn't use others. Other people use the successful man. For above all the success is of service" -- Mark Kainee

Daily Tech Digest - November 09, 2022

5 ways to use predictive insights to get the most from your data

With the proliferation of SaaS tools, we seem to be collecting so much more data, yet most companies still struggle to integrate it properly to extract insights that would be indicative of future performance. There are a variety of reasons for that: internal data privacy, legacy mindset around who owns what data, lags in data warehousing strategy or operational know-how about the mechanics of integrating it. ... The CMO Survey found that after a decade of integrating customer data across channels, marketers are still struggling, with most giving their organization a 3.5 out of 7 score on the effectiveness of their customer information integration across purchasing, communication and social media channels. ... Too often organizations are overly focused on dashboards and analyzing past trends to determine future actions. Dashboards and reports are often thought of as the final deliverables of data, but this thinking is limiting data’s value. Think about how your acquisition, monetization and retention journeys are orchestrated today, then feed predictive scoring data right into those business systems and tools. 


Coming Clean: Why Cybersecurity Transparency Is A Strength, Not A Weakness

In the wake of the new disclosure proposals, the management of cybersecurity events can no longer be an afterthought in maintaining operating standards. It’s now been elevated to a major concern along with financial risks, such as capital and credit risk. Despite the technical challenges, compliance is generally straightforward. Organizations must develop discipline in how they detect and defend against cyber threats. In addition, they must improve the way they report on them. If they don’t want their next cyber incident to turn into a material event, they need to minimize the risk of a breach in the first place. Remember, the opposite of due diligence is negligence. One way to get started is to focus on the application layer, as that’s where the “money” is. Decades of focus on network-based threats have improved the protection from some cyberattacks, but many business applications remain vulnerable. Applications suffer numerous vulnerabilities outlined by the OWASP Top 10. These are known, common threats that can be countered by using Web application firewalls.


AI eye checks can predict heart disease risk in less than a minute, finds study

“This AI tool could let someone know in 60 seconds or less their level of risk,” the lead author of the study, Prof Alicja Rudnicka, told the Guardian. If someone learned their risk was higher than expected, they could be prescribed statins or offered another intervention, she said. Speaking from a health conference in Copenhagen, Rudnicka, a professor of statistical epidemiology at St George’s, University of London, added: “It could end up improving cardiovascular health and save lives.” Circulatory diseases, including cardiovascular disease, coronary heart disease, heart failure and stroke, are major causes of ill health and death worldwide. Cardiovascular disease alone is the most common cause of death globally. It accounts for one in four deaths in the UK alone. While several tests to predict risk exist, they are not always able to accurately identify those who will go on to develop or die of heart disease. Researchers developed a fully automated AI-enabled tool, Quartz, to assess the potential of retinal vasculature imaging – plus known risk factors – to predict vascular health and death.


Mobile Application Security Best Practices

Strong credentials are a must for both web and mobile application development. For mobile apps, you can choose to either have a native login flow, which means the user enters their credentials within the app, or a web-based login flow, where they are directed to a web browser to log in. Native login flows provide a better user experience but are generally thought to be less secure. Hypermedia authentication APIs are a solution now popping up to bridge this gap and provide the best of both worlds. Hypermedia authentication APIs interact with the authorization server directly without the need for an intermediary like the browser window. Regardless of how the user enters their credentials, your app should enforce some type of password policy to ensure a strong password is used, and it should not store the access and refresh tokens anywhere except secure storage (like the iOS keychain or Android Keystore). ... Finally, your mobile app should follow best practices for secure coding, just as you would with web applications. Security should be incorporated from the start of the app’s design, with testing occurring throughout the development process.
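
As one illustration of the password-policy point, here is a language-agnostic sketch of the checks an app might run at sign-up. The thresholds and deny-list are examples only, not a standard; on device, the equivalent logic would live in Swift or Kotlin, with tokens kept in the keychain or Keystore as the excerpt notes.

```python
# Illustrative password-policy check of the kind an app might enforce at
# sign-up; the thresholds and the deny-list are examples, not a standard.
import re

def password_issues(password: str) -> list[str]:
    issues = []
    if len(password) < 12:
        issues.append("use at least 12 characters")
    if not re.search(r"[A-Z]", password):
        issues.append("add an uppercase letter")
    if not re.search(r"[a-z]", password):
        issues.append("add a lowercase letter")
    if not re.search(r"\d", password):
        issues.append("add a digit")
    if password.lower() in {"password1234", "qwerty123456"}:
        issues.append("avoid common passwords")
    return issues  # an empty list means the policy is satisfied

print(password_issues("correcthorse"))    # missing uppercase and digit
print(password_issues("Tr0ub4dor&3xyz"))  # []
```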


Cybersecurity threats: what awaits us in 2023?

Businesses will still be mostly concerned with ransomware. The conflict between Russia and Ukraine has marked an end to any possible law enforcement cooperation in the foreseeable future. We can therefore expect that cybercrime groups from either bloc will feel safe to attack companies from the opposing side. Some may even perceive this as their patriotic duty. The economic downturn will lead more people to poverty, which always translates to increased criminality, and we know ransomware to be extremely profitable. ... Zero trust will take on greater prominence with the continued role of the remote and hybrid workplace. Remote work will continue driving the need for zero trust since hybrid work is now the new normal. With the federal government mandating agencies to adopt zero-trust network policies and design, we expect this to become more common and the private sector to follow suit as 2023 becomes the year of verifying everything. ... In 2023, we might see a slight decline in the raw number of ransomware attacks, reflecting the slowdown of the cryptocurrency markets. 


Google and Renault are creating a 'software-defined vehicle'

Renault will leverage Google's Cloud technology to securely manage data capture and analytics. They'll also use Google's ML and AI capabilities. "Our collaboration with Renault Group has improved comfort, safety, and connectivity on the road," Sundar Pichai, CEO of Google and Alphabet, said in a statement. "Today's announcement will help accelerate Renault Group's digital transformation by bringing together our expertise in the cloud, AI, and Android to provide for a secure, highly-personalized experience that meets customers' evolving expectations." Google shares that some features of the SDV will include predictive maintenance, accurate real-time detection of vehicle failures, a better driving experience, and insurance models reflective of driving behaviors. "Equipped with a shared IT platform, continuous over-the-air updates, and streamlined access to car data, the SDV approach developed in partnership with Google will transform our vehicles to help serve future customers' needs," said Renault Group CEO Luca de Meo.


Why automating finance is just an integration game

What is clear is the increasing demand for decision intelligence with financial analytics at its heart. RPA suppliers are increasingly repositioning themselves as automated intelligence companies, using RPA tools to drive key functions, such as finance. Gartner believes a third of large organisations will be using decision intelligence for structured decision-making to improve competitive advantage in the next two years. Recent research by enterprise application integration firm Jitterbit backs this up. Focusing on mid-sized companies (referred to as Mittelstand) in the DACH region (comprising Germany, Austria and Switzerland), Jitterbit found that 73% of these businesses want to be hyperautomated within three years because “the health of their company depends on it”. The barriers to achieving this are typical – too many manual data processes, isolated data silos and a lack of departmental integration. What is becoming clear is that financial analytics can be the core and the catalyst of intelligent automation transformations. 


Detecting Cyber Risks Before They Lead to Downtime

To avoid costly downtime, threats to operational continuity must be detected and investigated as early as possible. That can be accomplished by scanning connected devices for configuration changes and vulnerabilities. However, unlike traditional IT, OT assets cannot be continuously scanned in the same manner and many risks will remain unnoticed. Instead, a system designed for manufacturing environments must have the ability to passively monitor the network infrastructure to locate assets and detect behavior changes and anomalies. That requires understanding dozens of industrial protocols and continuously monitoring the communications and checking against a database of OT/ICS-specific Indicators of Compromise (IOCs, or evidence of a breach) and CVEs. The bane of many monitoring systems is that they produce a flood of information about potential harm, not all of it urgent. To be useful, critical alerts must be prioritized based on operational or cybersecurity risk so the right team can respond. For example, OT engineers need to quickly spot undesired process values, incorrect measurements or when a critical device fails so they can resolve issues more quickly.
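
A stripped-down sketch of the IOC-matching and prioritisation step might look like the following. The indicator database, events, and risk scores are all invented for illustration; a real system would derive them from passive monitoring of industrial protocols.

```python
# Minimal sketch: match passively observed OT events against an IOC list and
# rank the resulting alerts by risk so the most critical ones surface first.
iocs = {
    "192.0.2.44":  {"desc": "known C2 address",             "risk": 9},
    "write_plc":   {"desc": "unexpected PLC write command", "risk": 8},
    "fw_mismatch": {"desc": "controller firmware anomaly",  "risk": 5},
}

observed_events = [
    {"asset": "plc-07", "indicator": "write_plc"},
    {"asset": "hmi-02", "indicator": "192.0.2.44"},
    {"asset": "plc-01", "indicator": "normal_poll"},   # no IOC match
]

# Keep only events whose indicator appears in the IOC database.
alerts = []
for event in observed_events:
    match = iocs.get(event["indicator"])
    if match:
        alerts.append({**event, **match})

# Highest-risk alerts first, so the right team sees them immediately.
for alert in sorted(alerts, key=lambda a: a["risk"], reverse=True):
    print(f"[risk {alert['risk']}] {alert['asset']}: {alert['desc']}")
```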


Challenges to Successful AI Implementation in Healthcare

Incorporating AI systems could improve healthcare efficiency without compromising quality, and this way, patients could receive better and more personalized care. Investigations, assessments, and treatments can be simplified and improved by using AI systems that are smart and efficient. However, implementing AI in healthcare is challenging because it needs to be user-friendly and deliver value for patients and healthcare professionals. AI systems are expected to be easy to use and user-friendly, self-instructing, and not require extensive prior knowledge or training. Besides being simple to use, AI systems should also be time-saving and never demand different digital operating systems to function. ... The healthcare experts noted that implementing AI systems in the county council will be difficult due to the healthcare system’s internal capacity for strategic change management. For the promotion of capabilities to work with implementation strategies of AI systems at the regional level, experts highlighted the need for infrastructure and joint ventures with familiar structures and processes. 


AI Ethics: Four Essentials CIOs Must Know

Enterprises must investigate the data used to train the algorithm in order to develop explainable AI. Although this won’t address the bias issue, it will guarantee that firms are aware of the underlying causes of any problems so they can take appropriate action. Synthetic data, in addition to actual data sets, is essential for addressing ethical issues. For instance, synthetic data can be used to correct biases in real data that are unjust and skewed toward particular groups of individuals. Additionally, synthetic data can be used to boost the volume and produce an objective dataset if the volume is inadequate. ... Executives must design AI systems that can instantly identify fabricated data and immoral behavior. This necessitates screening suppliers and partners for the improper use of AI in addition to examining a company’s own AI. Examples include the employment of convincing false text and videos to discredit competitors or the use of AI to carry out sophisticated cyber-attacks. As AI technologies become more accessible, this problem will worsen.



Quote for the day:

"Good leaders make people feel that they're at the very heart of things, not at the periphery." -- Warren G. Bennis

Daily Tech Digest - July 07, 2022

Metaverse Standards Forum Makes Data Interoperable But Only For Big Tech

Interoperability is the driving force for the growth and adoption of the open metaverse. Hence, the Metaverse Standards Forum aims to analyze the interoperability necessary for running the metaverse. More than 30 companies took up their respective posts as founding members of the forum. Game developers, architects, and engineers are mere clicks away from building the next cutting-edge metaverse project with artificial intelligence and advanced hardware. Setting interoperability standards with consideration to available technology is crucial to the mass adoption of the metaverse. Similar to the Metaverse Standards Forum, some key players are missing from the Oasis Consortium, like Meta. And in the past, groups like this have become smaller and smaller once internal conflict inevitably arises. The Metaverse Standards Forum is led by the Khronos Group, a nonprofit consortium working on AR/VR, artificial intelligence, machine learning, and more. Khronos has already tried to set a standard for VR APIs with its similarly named VR Standards Initiative in 2016, which included companies like Google, Nvidia, Epic Games and Oculus, which is now part of Meta.


Identity Access Management Is Set for Exploding Growth, Big Changes — Report

As SaaS and cloud subscription services have proliferated in the space, smaller firms increasingly have found IAM within their reach, and this study says to expect this trend to snowball. Whereas the subscription model makes up 60% of the market now, in five years the researchers forecast it will make up 94% of all IAM spending. Meanwhile, other, broader IT trends such as the explosion in cloud computing, bring-your-own-device (BYOD) policies, mobile computing, Internet of Things (IoT), and more geographically dispersed workers are all spurring greater IAM services spending to solve an acute need for saner access control. "There are more devices and services to be managed than ever before, with different requirements for associated access privileges," according to Juniper's analysts. "With so much more to keep track of, as employees migrate through different roles in an organization, it becomes increasingly difficult to manage identity and access." According to Naresh Persaud, managing director in cyber-identity services for Deloitte Risk & Financial Advisory, the market has been especially jumpstarted in the last 12 to 18 months as organizations work to accommodate a broader range and larger scale of remote-work situations.


Working with Microsoft’s .NET Rules Engine

Getting started with the .NET Rules Engine is relatively simple. You will need to first consider how to separate rules from your application and then how to describe them in lambda expressions. There are options for building your own custom rules using public classes that can be referred to from a lambda expression, an approach that gets around the limitations associated with lambda expressions only being able to use methods from .NET’s system namespace. You can find a JSON schema for the rules in the project’s GitHub repository. It’s a comprehensive schema, but in practice, you’re likely to only need a relatively basic structure for your rules. Start by giving your rules workflow a name and then following it up with a nested list of rules. Each rule needs a name, an event that’s raised if it’s successful, an error message and type, and a rule expression that’s defined as a lambda expression. Your rule expression needs to be defined in terms of the inputs to the rules engine. Each input is an object, and the lambda function evaluates the various values associated with the input. 
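
A minimal workflow in that JSON structure could look like the sketch below. The field names follow the schema published in the project’s GitHub repository, but verify them against the version you use; the loan-eligibility rules and the input1 payload they reference are invented for illustration.

```json
{
  "WorkflowName": "LoanEligibility",
  "Rules": [
    {
      "RuleName": "MinimumIncome",
      "SuccessEvent": "IncomeOk",
      "ErrorMessage": "Applicant income is below the threshold.",
      "ErrorType": "Error",
      "RuleExpressionType": "LambdaExpression",
      "Expression": "input1.AnnualIncome >= 30000"
    },
    {
      "RuleName": "AcceptableCreditScore",
      "SuccessEvent": "CreditOk",
      "ErrorMessage": "Credit score is too low.",
      "ErrorType": "Error",
      "RuleExpressionType": "LambdaExpression",
      "Expression": "input1.CreditScore > 650"
    }
  ]
}
```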


10 Questions to Ask Yourself Before Starting Your Entrepreneurial Journey

Entrepreneurship is over-glorified and misrepresented on social media. In reality, it is about building a business that solves a problem for a consumer. It's not about driving nice cars or posting nice pictures on social media. In fact, real entrepreneurship looks quite contrary to what we see on social media. Do we require a certain level of luck, genetics and an environment around us to be an entrepreneur? Yes — somewhat, for sure. But also, anyone can solve problems anywhere in the world. That is true for both small problems and big problems. The choice comes in the decision to find people who have needs, wants and issues that you can offer a solution for. It is also a choice that each of us gets to make on how well we wish to solve that issue — how obsessed we are willing to become with that solution and how above and beyond we are willing to go with servicing the customers well. Beyond the business solution also comes the personal and emotional responsibility — shaping and growing ourselves to be able to handle and maneuver through constant stress and difficulties. 


Don’t let automation break change management

Where automation is essential and unavoidable, network teams need to make sure all the good they can do with automation is not done at the expense of or in conflict with one of the other pillars of enterprise IT: change management. They need to make sure automation is controlled by change management, and that they are keeping change management processes in step with their increasing reliance on automation. One aspect is to implement change management on the automation, including the scripts, config files, and playbooks, used to manage the network. The use of code management tools helps with this: check-out and check-in events help staff remember to follow other parts of proper process. Applying change management at this level means describing the intended modifications to the automation, testing them, planning deployment, having a fallback plan to the previous known-good code where that is applicable, and determining specific criteria by which to judge whether the change succeeded or needs to be rolled back.


Imagination is key to effective data loss prevention

SecOps teams are charged with protecting data on a network or endpoint in each of its forms: at rest, in use, and in motion. To be in the driver’s seat and create the appropriate rules or policies to protect data across these three classifications requires teams to understand their environment fully. This is why organizations should consider implementing a flexible, scalable XDR (extended detection and response) architecture that can seamlessly integrate with their current security tools and connect all the dots to eliminate security gaps. With native integrations and connections for security policy orchestration across data and users, endpoints and collaboration, clouds and infrastructure, an XDR architecture provides SecOps teams with maximum visibility and control. ... Knowing what to protect, even before establishing protection, is key. So much so that comprehensive data visibility is a critical tenet for any SecOps team. Achieving this enables security teams to have the flexibility to create data protection parameters tailored to their own specific needs, creating an environment where the only limit on what they can achieve is their imagination.


The importance of digital skills bootcamps to UK tech industry success

The success of digital skills bootcamps in helping to secure the UK tech industry’s future is heavily contingent on the level of involvement from businesses. At present, however, not enough organisations are devoting the time needed to upskill or reskill staff, with research conducted by MPA Group finding that over a third of companies – 35 per cent – only allow workers to devote less than two hours per week to training, research, and development. Although there may be a number of reasons for this, MPA Group’s research indicated that ‘a lack of budget’ was considered by businesses to be the largest barrier for workplaces allowing staff to spend time on development. Digital skills bootcamps are helping to solve this problem by enabling companies to take advantage of the considerable state investment in the initiative, meaning organisations are given more affordable access to industry-led training. What’s more, with bootcamps having already been trialled to great success in places like the West Midlands – where approximately 2,000 adults have been trained with essential tech skills over the past few years – firms have the opportunity to hire recent programme graduates who can help impart what they have learned onto their workers.


The Parity Problem: Ensuring Mobile Apps Are Secure Across Platforms

So to build a robust defense, mobile developers need to implement a multi-layered defense that is both ‘broad’ and ‘deep’. By broad, I'm talking about multiple security features from different protection categories, which complement each other, such as encryption + obfuscation. By ‘deep’, I mean that each security feature should have multiple methods of detection or protection. For example, a jailbreak-detection SDK that only performs its checks when the app launches won’t be very effective because attackers can easily bypass the protection. Or consider anti-debugging, which is an important runtime defense to prevent attackers from using debuggers to perform dynamic analysis – where they run the app in a controlled environment for purposes of understanding or modifying the app’s behavior. There are many different types of debuggers – some based on LLDB – for native code like C++ or Objective-C, others that inspect at the Java or Kotlin layer, and a lot more. Every debugger works a little bit differently in terms of how it attaches to and analyzes the app.


4 ways CIOs can create resilient organizations

As CIO, you need to make sure your technology investments enable change. After all, you might need to support an entirely remote employee population. You might need to offer new capabilities that attract top talent or quickly shut down business in a region wracked by geopolitical conflict. Organizations invest large sums in migrating to the cloud. One reason is the ability to grow with needs. But technology scale is no longer the primary benefit of the cloud. And scale is no longer a guarantee of resilience. Rather, focus your cloud and software-as-a-service (SaaS) investments on supporting rapid change. Multi-cloud strategy, containerization, agile DevSecOps development methodologies: All should be designed around elasticity that equips you to make quick wins or pivot to new business models. ... Data analytics can provide holistic views and predictive models that help CIOs and others understand emerging trends. Those insights support data-driven decision-making and ultimately, resilience. That’s because you no longer have to rely on gut feel to prepare for an otherwise unpredictable future. 


What happens when there’s not enough cloud?

Most companies struggle to find enough customers to buy their products. According to Selipsky in a Mad Money interview, cloud companies like AWS might have the opposite problem. “IT is going to move to the cloud. And it’s going to take a while. You’ve seen maybe only, call it 10% of IT today move. So it’s still day 1. It’s still early. … Most of it’s still yet to come.” Years ago I noted that the cloud will take time. Not because there’s limited demand, but precisely because even with enterprises on a full sprint to the cloud, there are trillions of dollars’ worth of IT to modernize. As MongoDB CMO Peder Ulander responded to McLaughlin, “If anything, the growing shortage of capacity is a watershed moment for AWS, Google Cloud, and Microsoft Azure.” (Disclosure: I work for MongoDB.) In a hot market, it’s standard for demand to outstrip supply. Ulander cites products as diverse as Teslas or Tickle Me Elmo toys. What’s interesting here is that we’re having the enterprise equivalent of a 1996 Tickle Me Elmo shortage. 



Quote for the day:

"Leaders know the importance of having someone in their lives who will unfailingly and fearlessly tell them the truth." -- Warren G. Bennis

Daily Tech Digest - February 28, 2022

Follow your S curve

By the time Rogers’s seminal Diffusion of Innovations was published in 1962, the rural sociologist was convinced that the S curve of innovation diffusion depicted “a kind of universal process of social change.” Indeed, S curves have been used in many arenas since then, and Rogers’s book is among the most cited in the social sciences, according to Google Scholar. Johnson’s S Curve of Learning follows this well-established path. There’s the slow advancement toward a “launch point,” during which you canvass the (hopefully) myriad opportunities for career growth available to you and pick a promising one. Then there’s the fast growth once you hit the “sweet spot,” as you build momentum, forging and inhabiting the new you. And, finally, there is “mastery,” the stage in which you might cruise for a while, reaping the rewards of your efforts, before you start looking for something new, starting the cycle all over again. Johnson lays out six different roles that you must play as you travel along her learning curve. In the launch phase, where I spent what felt like an eternity, you first act as an Explorer, who searches for and picks a destination.


Automation: 5 issues for IT teams to watch in 2022

IT automation rarely involves IT alone. Virtually any initiative beyond the experimentation or proof-of-concept phase will involve at least two – and likely several – areas of the business. The more ambitious the goals, the truer this becomes. Good luck to the IT leaders that tackle “improve customer satisfaction ratings by X” or “reduce call wait times by Y” without involving marketing, customer service/customer experience, and other teams, for example. In fact, automation initiatives are best served by aligning various stakeholders from the very start – before specific goals (and metrics for evaluating progress toward those goals) are set. “It’s really important to identify the key benefits you wish to achieve and get all stakeholders on the same page,” says Mike Mason, global head of technology at Thoughtworks. This entails more than just rubber-stamping your way to a consensus that automation will be beneficial to the business. Stakeholders need to align on why they want to automate certain processes or workflows, what the impacts (including potential downsides) will be, and what success actually looks like. Presuming alignment on any of these issues can put the whole project at risk.


Daxin: Stealthy Backdoor Designed for Attacks Against Hardened Networks

Daxin is a backdoor that allows the attacker to perform various operations on the infected computer such as reading and writing arbitrary files. The attacker can also start arbitrary processes and interact with them. While the set of operations recognized by Daxin is quite narrow, its real value to attackers lies in its stealth and communications capabilities. Daxin is capable of communicating by hijacking legitimate TCP/IP connections. In order to do so, it monitors all incoming TCP traffic for certain patterns. Whenever any of these patterns are detected, Daxin disconnects the legitimate recipient and takes over the connection. It then performs a custom key exchange with the remote peer, where two sides follow complementary steps. The malware can be both the initiator and the target of a key exchange. A successful key exchange opens an encrypted communication channel for receiving commands and sending responses. Daxin’s use of hijacked TCP connections affords a high degree of stealth to its communications and helps to establish connectivity on networks with strict firewall rules.


Leveraging mobile networks to threaten national security

Once threat actors have access to mobile telecoms environments, the threat landscape is such that several orders of magnitude of leverage are possible in the execution of cyberattacks. An ability to variously infiltrate, manipulate and emulate the operations of communications service providers and trusted brands – abusing the trust of countless people using their services every day – derives from threat actors’ capability to weaponize ‘trust’ built into the design itself of protocols, systems, and processes exchanging traffic between service providers globally. The primary point of leverage derives from the sustained capacity of threat actors over time to acquire data of targeting value including personally identifiable information for public and private citizens alike. While such information can be gained through cyberattacks directed to that end on the data-rich network environments of mobile operators themselves, the incidence of data breaches of major data holders across industries today is such that it is increasingly possible to simply purchase massive amounts of such data from other threat actors.


A Security Technique To Fool Would-Be Cyber Attackers 

Researchers demonstrate a method that safeguards a computer program’s secret information while enabling faster computation. Multiple programs running on the same computer may not be able to directly access each other’s hidden information, but because they share the same memory hardware, their secrets could be stolen by a malicious program through a “memory timing side-channel attack.” This malicious program notices delays when it tries to access a computer’s memory, because the hardware is shared among all programs using the machine. It can then interpret those delays to obtain another program’s secrets, like a password or cryptographic key. One way to prevent these types of attacks is to allow only one program to use the memory controller at a time, but this dramatically slows down computation. Instead, a team of MIT researchers has devised a new approach that allows memory sharing to continue while providing strong security against this type of side-channel attack. Their method is able to speed up programs by 12 percent when compared to state-of-the-art security schemes.
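
Python cannot reproduce real cache-timing measurements, but the inference step can be simulated. In the toy below, where all latencies and the secret are invented, a victim's secret-dependent memory traffic delays the attacker's accesses, and the attacker recovers the bits from latency alone, which is precisely the channel the MIT scheme aims to close.

```python
# Toy simulation of a memory timing side channel (conceptual only): a victim's
# secret-dependent memory use delays the attacker's accesses, and the attacker
# recovers the secret bits purely from observed latency. Numbers are made up.
import random

random.seed(1)
SECRET_BITS = [1, 0, 1, 1, 0, 0, 1, 0]
BASE_LATENCY, CONTENTION_DELAY = 10, 7   # arbitrary time units

def attacker_probe(victim_bit: int) -> float:
    # When the victim's secret bit is 1 it issues memory traffic, so the
    # shared controller serves the attacker later; jitter models noise.
    jitter = random.uniform(-1, 1)
    return BASE_LATENCY + victim_bit * CONTENTION_DELAY + jitter

threshold = BASE_LATENCY + CONTENTION_DELAY / 2
recovered = [1 if attacker_probe(bit) > threshold else 0 for bit in SECRET_BITS]
print("secret:   ", SECRET_BITS)
print("recovered:", recovered)  # matches when the timing signal beats the noise
```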


Is API Security the New Cloud Security?

While organizations previously used APIs more sparingly, predominantly for mobile apps or some B2B traffic, “now pretty much everything is powered by an API,” Klimek said. “So of course, all of these new APIs introduce a lot of security risks, and that’s why a lot of CISOs are now paying attention.” Imperva, which Gartner named a “leader” in its web application and API protection (WAAP) Magic Quadrant, lumps API security risks into two categories, according to Klimek. The first one, technical vulnerabilities, includes a bunch of risks that can also exist in standard web applications such as the OWASP Top 10 application security risks and CVE vulnerabilities. The recent Log4j vulnerability falls into this bucket — and demonstrates how far-reaching these types of security flaws can be. Most Imperva customers tackle these API threats first, “because they tend to be some of the most acute and they require just adopting their existing application security strategies,” such as code scanning during the development process and deploying web application firewalls or runtime application self-protection technology, Klimek explained.


Inside the blockchain developers’ mind: Building a free-to-use social DApp

While we still have a pretty good user experience, telling people they have to spend money before they can use an app is a barrier to entry and winds up feeling a whole lot like a fee. I would know; this is exactly what happened on our previous blockchain, Steem. To solve that problem, we added a feature called “delegation” which would allow people with tokens (e.g. developers) to delegate their mana (called Steem Power) to their users. This way, end-users could use Steem-based applications even if they didn’t have any of the native token STEEM. But, that design was very tailored to Steem, which did not have smart contracts and required users to first buy accounts. The biggest problem with delegations is that there was no way to control what a user did with that delegation. Developers want people to be able to use their DApps for free so that they can maximize growth and generate revenue in some other way like a subscription or through in-game item sales. They don’t want people taking their delegation to trade in decentralized finance (DeFi) or using it to play some other developer’s great game like Splinterlands.


Data governance at the speed of business

Once the data governance organization has been built and its initial policies defined, you can begin to build the muscles that will make data governance a source of nimbleness that will help you anticipate issues, seize opportunities, and pivot quickly as the business environment changes and new sources of data become available. Your data governance capability is responsible for identifying, classifying, and integrating these new and changing data sources, which may come in through milestone events such as mergers or via the deployment of new technologies within your organization. It does so by defining and applying a repeatable set of policies, processes, and supporting tools, the application of which you can think of as a gated process, a sequence of checkpoints new data must pass through to ensure its quality. The first step of the process is to determine what needs to be done to introduce the new data harmoniously. Take, for example, one of our B2B software clients that acquired a complementary company and sought to consolidate the firm’s customer data. 


Irish data watchdog calls for ‘objective metrics’ for big tech regulation

Dixon said that “in some respects at least”, the DPC needs to do better and that it would be beneficial for regulators to have a “shared understanding” of what measures they are tracking. “In the absence of an agreed set of measures to determine achievements or deficiencies, the standing of the GDPR’s enforcement regime in overall terms is at risk of damage,” she said. Dixon said that this was particularly the case “when certain types of allegations” levelled against the Irish DPC “serve only to obscure the true nature and extent of the challenges” presented by the EU regulatory framework – which requires member states to legislate for the enforcement of data protection across the EU. ... That has created a vacuum and “a narrative has emerged in which the number of cases, the quantity and size of the administrative fines levied, are treated as the sole measure of success, informed by the effectiveness of financial penalties” at driving changes in behaviour.


Digital transformation: 3 roadblocks and how to overcome them

Many sectors, such as healthcare and financial services, operate within a complex web of constantly changing regulations that can be difficult to navigate. These regulations, while robust, are critical for sensitive data such as patient information in healthcare, proper execution of protocol in law enforcement, and other essential data that must be managed and used responsibly. How customer and internal data is collected, stored, managed, and used must be prioritized, especially when an enterprise transitions from legacy systems. Establishing a digital system that supports compliance with regulations is a challenge, but once the system is established, every interaction within the organization becomes data that can be monitored if you have the tools to interpret it. Knowing what is going on in every corner of an organization is central to remaining compliant, and setting up intelligent tools that can detect risk across the enterprise will ensure that your organization’s digital transformation is rooted in compliance-first strategies.



Quote for the day:

"Great Groups need to know that the person at the top will fight like a tiger for them. "-- Warren G. Bennis

Daily Tech Digest - August 24, 2021

The CISO in 2021: coping with the not-so-calm after the storm

Naturally, the challenges facing the modern CISO are not focused on one front. Those on the receiving end of cyber attacks are of just as much concern as those behind them. More than half believe that users are the most significant risk facing their organisation. And just like the threats from the outside, there are several causing concern from within. Human error, criminal insider attacks and employees falling victim to phishing emails are just some of the issues keeping CISOs up at night. With many users now out of sight, working remotely, at least some of the time, these concerns are more pressing than they may once have been. Nearly half of UK CISOs believe that remote working increases the risk facing their organisation. And it’s easy to see why. Non-corporate environments tend to make us more prone to errors and misjudgement, and in turn, more vulnerable to cyber attack. Working from home also calls for slight alterations to security best practice. The use of personal networks and devices may require increased protocols and protections.


How do I select an automated red teaming solution for my business?

There are, however, tools that can help train defenders or aid in discovering gaps in defensive investment. There are three initial considerations for these tools. For the best defenders, identifying behavior, not static signatures or tools, is crucial. By correlating events and telemetry, they can spot new / unknown tools and react faster. To create this, the simulation tool must run complex chains of techniques based on the environment: checking the OS, downloading an implant, executing persistence, then searching local files before moving laterally, as an example. Secondly, the solution’s techniques must be relevant, basing them on updated imitations of those observed from real actors. Use of threat intelligence will benchmark against genuine attackers instead of generic outdated threats, decreasing the likelihood of defensive gaps. Finally, getting metrics on the performance of the current defensive set-ups requires the solution to integrate with the SIEM. Without this, the ability to gain evidence of MITRE-mapped controls failing becomes cumbersome and error prone.


What Enterprises Can Learn from Digital Disruption

Operating in today's climate means updating mindsets, processes, budgeting cycles, incentive systems and traditional ways of working. It's not about ping pong tables and arcade rooms. It's being better at delivering on core competencies than competitors and having the digital savviness required to succeed in a digital-first world. However, the most valuable trait is curiosity because curiosity leads to experimentation, innovation, optimization, and learning. “Disruptors face the challenge of explaining the concept and the benefits of the new approach. Many organizations struggle to grasp it and operate under the inertia of business as usual,” says Greg Brady, founder and chairman of supply chain control tower provider One Network Enterprises. “The COVID-19 pandemic has opened the eyes of many executives to the shortcomings of the old way of doing business.” Some organizations attempt to mimic what the digital disrupters do. However, their success tends to depend on the context in which the concept was executed.


Break the Cycle of Yesterday's Logic in Organizational Change and Agile Adoption

Like Tibetan prayer wheels, each framework promises to be the best business changer if one follows their special consultancy. Affected by the marketing machinery, executives and senior managers pick one of them, hoping it will suit them, instead of looking to their inner and outer organizational opportunities and boundaries to find real value-adding outcomes for their business. These artificial dual operating systems get designed alongside the line organisations with their job descriptions, hierarchies, performance contracts, engineering models and cultural values. Hurdles are preprogrammed because, for many technically driven enterprises, industrial standards simply don’t scale with agile frameworks. A logical inference is that the necessary variety is very much lost. Operationalization of variety with minimal investment costs is entrapped. Consequently, the change system behavior will be like dandelion seeds - the change will take time, costs will spread, and development transaction costs will increase.


How to choose the best NVMe storage array

NVMe’s parallelism is fundamental to its value. Where SAS-based storage supports a single message queue and 256 simultaneous commands per queue, NVMe ramps this all the way up to 64,000 queues, each with support for 64,000 simultaneous commands. That massive increase is key to enabling you to ramp up the number of VMs on a single physical host, driving greater efficiency and easing management. Identifying individual workloads and planning for growth over time--along with high availability needs and continuity requirements (backup/restore, replication, geo-redundancy, or simply disaster recovery)--can help paint a picture of what you need in an NVMe array. While each of these considerations has the potential to drive up the initial cost of whichever NVMe array you select (or multiple arrays, when you consider redundancy), smart investments that match your needs ultimately reduce your cost of ownership in the long run. NVMe arrays are big-ticket items, so efficient storage practices are critical to making the most of the hardware you buy and extending the lifecycle of your storage media.


Progressive Delivery: A Detailed Overview

In a traditional waterfall model, teams release new features to an entire user base at one time. Using progressive delivery, you roll out features gradually. Here’s how it works: DevOps managers first ship a new feature to release managers for internal testing. Once that’s done, the feature goes to a small batch of users to collect additional feedback, or is incrementally released to more users over time. The final step is a general launch when the feature is ready for the masses. It’s a bit like dipping your toes into the water before diving in. If something goes wrong during a launch, you haven’t exposed your entire user base to it. You can easily roll the feature back if you need to and make changes. Progressive delivery emerged in response to widespread dissatisfaction with the continuous delivery model. DevOps teams needed a way to control software releases and catch issues early on instead of pumping out bug-filled versions to their users, and progressive delivery met this requirement.
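
A common way to implement the incremental step is deterministic percentage bucketing, sketched below with an invented feature name. Hashing the user ID means raising the percentage only ever adds users; it never flip-flops someone who already has the feature.

```python
# Minimal sketch of a gradual rollout: hash each user ID into a stable bucket
# and enable the feature for a configurable percentage of users.
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    # Deterministic: the same user always lands in the same bucket, so the
    # rollout can widen over time without toggling anyone back and forth.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Week 1: 5% of users; later: widen to 25%; general launch: 100%.
users = [f"user-{n}" for n in range(1000)]
for pct in (5, 25, 100):
    enabled = sum(in_rollout(u, "new-checkout", pct) for u in users)
    print(f"{pct}% target -> {enabled} of {len(users)} users enabled")
```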


Employees Can Be Insider Threats to Cybersecurity. Here's How to Protect Your Organization.

Politics are another strong motivation for employees to become insider threats. For example, an employee might be upset with his or her work situation or job title but can't see a way to fix it because of inter-office politics. This could lead to that employee becoming disgruntled and wanting to take revenge on the company. This situation is common in enterprise-level organizations, where management doesn't take the time to get to know their employees or address their concerns. Providing an environment where employees can reach their full potential and have open lines of communication with their chain of command can help mitigate potential political concerns. This ties closely to professional reasons. For example, employees might feel slighted after being passed over for a promotion, or they might be the target of an internal investigation for misconduct. On the other hand, they could find themselves the target of misconduct by a peer or boss, which could lead them to take matters into their own hands. Humans are emotional creatures, and this, of course, applies to employees as well. 


Three reasons why ransomware recovery requires packet data

SecOps team members or external consultants can comb through the data to find the original malware that caused the attack, determine how it got onto the network in the first place, map how it traversed the network and determine which systems and data were exposed. Note that the storage capacity required to store even a week’s worth of packet data can quickly become prohibitively expensive for high-speed networks. To have a realistic chance of storing a large enough buffer, these organizations will need to be smart about where to capture and how much to capture. One way to do this is to use intelligent packet filtering and deduplication by front-ending the packet capture devices with a packet broker to reduce the amount of data saved. Another method is using integrations between the security tools and the capture devices to only capture packet data correlated with incidents or high alerts. Using a rolling buffer strategy to overwrite the data after a “safe period” has passed will also reduce storage requirements. 
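
The rolling-buffer strategy in particular reduces to a small amount of bookkeeping. The sketch below is illustrative only: the safe period and structure are made up, and real capture appliances index packets to disk rather than holding them in memory.

```python
# Minimal sketch of a rolling capture buffer: keep packet records only for a
# "safe period", overwriting older data to bound storage requirements.
import time
from collections import deque

SAFE_PERIOD_SECONDS = 7 * 24 * 3600   # e.g. keep one week of packets

class RollingBuffer:
    def __init__(self, safe_period: float = SAFE_PERIOD_SECONDS):
        self.safe_period = safe_period
        self.records = deque()          # (timestamp, packet_bytes)

    def add(self, packet: bytes, now: float | None = None) -> None:
        now = time.time() if now is None else now
        self.records.append((now, packet))
        # Evict anything older than the safe period from the front.
        while self.records and now - self.records[0][0] > self.safe_period:
            self.records.popleft()

    def window(self, start: float, end: float) -> list[bytes]:
        """Packets within a time range, e.g. around a correlated incident."""
        return [p for t, p in self.records if start <= t <= end]

buf = RollingBuffer(safe_period=60.0)   # one-minute buffer for the demo
buf.add(b"\x45\x00", now=0.0)
buf.add(b"\x45\x00", now=90.0)          # the first packet is now evicted
print(len(buf.records))                 # 1
```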


The key to mobile security? Be smarter than your device

What people often forget is that the shiny all-singing, all-dancing device in their pocket is also a highly capable surveillance device, boasting advanced sensory equipment (camera and microphone), and a wealth of tracking information. People just assume that their mobile device is secure and often use it with less care (from a security point of view) for things that they wouldn’t do on a laptop. To this end, we now have a vast industry that sets out to secure and empower productivity on the basis that people can work anywhere and often use their devices for both work and personal use. Mobility and cloud technology have become essential with most people now working and managing their personal lives in a digital fashion. To borrow a saying from the world of Spider-Man (slightly out of context) — with great power comes great responsibility. We now live in a world where the once humble communication device is now a very powerful tool that needs to be used responsibly in the face of those wishing to act in a nefarious way. 


How to Develop a Data-Literate Workforce

You probably already know the importance of data literacy, but to frame this article, let's position the benefits in a modern data governance setting. The best way to do so is with an example where the absence of data literacy led to disastrous consequences. There are many well-known examples of data literacy issues leading to extreme failures, but one of the most significant occurred at NASA in 1999 and led to the loss of a $125 million Mars probe. The probe burnt up as it descended through the Martian atmosphere because of a mathematical error caused by conflicting unit conventions. The navigation team at NASA's Jet Propulsion Laboratory (JPL) worked in metric units (newton-seconds of thruster impulse), while Lockheed Martin Astronautics, the company responsible for designing and building the probe, supplied thruster performance data in imperial units (pound-force seconds). Because there were no common terms or definitions in place, the JPL team misread the data and miscalculated the craft's trajectory. The result was catastrophic, but it could easily have been avoided if a system of data literacy had been in place.
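
The lesson generalizes to everyday data work: values handed between teams should carry their units explicitly. Here is a toy Python sketch of the idea; the Impulse class and the numbers are hypothetical, and the real remedy at NASA was of course process-level, not a code snippet:

```python
# One pound-force second expressed in newton-seconds.
LBF_S_TO_N_S = 4.44822

class Impulse:
    """Thruster impulse normalized to newton-seconds at the point of entry."""
    def __init__(self, value: float, unit: str):
        if unit == "N*s":
            self.newton_seconds = value
        elif unit == "lbf*s":
            self.newton_seconds = value * LBF_S_TO_N_S
        else:
            raise ValueError(f"unknown unit: {unit!r}")

# The same raw number means very different things in the two unit systems;
# tagging it forces the conversion instead of letting a ~4.45x error slip by.
jpl = Impulse(100.0, "N*s")
lockheed = Impulse(100.0, "lbf*s")
print(jpl.newton_seconds, lockheed.newton_seconds)  # 100.0 vs ~444.8
```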



Quote for the day:

"The first key to leadership is self-control" -- Jack Weatherford

Daily Tech Digest - April 05, 2021

Encrypted method that measures encounters could slow down or prevent future pandemics

Current approaches for mitigating the spread of infectious disease in a population include exposure notification systems, also known as contact tracing, that rely on pseudonyms. These systems currently run on smartphones as a way to digitally track whether a person has come into contact with someone who has contracted COVID-19, helping health officials mitigate the spread of the disease by isolating individuals at risk of infecting others. A key benefit of the encounter-ID method is the privacy it affords: because each encounter is labeled with a random number that is never linked to the device a person is carrying, it is much harder for a cyber attacker to recover a user's identity. The target audience for this approach would be a smaller population in a controlled setting, such as NIST's campus or nursing homes, said researcher Angela Robinson, also an author of the new paper. "We are advancing a different approach to contact tracing using encounter metrics." Gathering these measurements of how individuals interact can help identify ways of modifying working environments, such as altering building layouts and establishing mobility rules, so as to slow the spread of disease.
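
To make the idea concrete, here is a minimal Python sketch of how two devices might derive a shared, anonymous encounter ID from ephemeral random tokens; this illustrates the general approach only, and is not the NIST team's actual protocol:

```python
import hashlib
import os

def new_ephemeral_token() -> bytes:
    """Fresh random token a device broadcasts for the current time slot."""
    return os.urandom(16)

def encounter_id(my_token: bytes, their_token: bytes) -> str:
    """Order-independent ID both devices can derive. It names the encounter,
    not either device, so a log of IDs reveals contacts but not identities."""
    first, second = sorted([my_token, their_token])
    return hashlib.sha256(first + second).hexdigest()

alice, bob = new_ephemeral_token(), new_ephemeral_token()
# Both phones log the same ID without ever learning who the other party is.
assert encounter_id(alice, bob) == encounter_id(bob, alice)
```

Because tokens rotate each time slot, even a full dump of one device's log yields only unlinkable random numbers.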


Blockchain and taking the politics out of tech

One of the biggest problems and challenges in the world of crypto is how you make sure that people transacting in crypto are not sending money to terrorists or using crypto to engage in money laundering. And it's a problem because the whole promise of crypto is to allow people to transact peer to peer without the need for a bank in the middle, right? Normally, if you're writing a check, it goes through the banking system and the bank looks to see who the payee is and figures out if they're on some watch list; if you're using cash, there are currency transaction reports you have to fill out. ... Blockchain identity verification makes probabilistic judgments based on a large amount of data. So it may not know for sure that you're not Vladimir Putin. But what it does know is that you're a person who bought a latte at a Starbucks in Palo Alto yesterday, or that you're a person with a Netflix subscription you've been paying for 23 months. And when we make these probabilistic judgments, we can reduce to a statistically low rate the likelihood that you're engaged in some kind of malfeasance.
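
A minimal sketch of how such probabilistic judgments might be combined, assuming independent behavioral signals scored in log-odds space; every signal name and weight below is invented purely for illustration:

```python
import math

# Hypothetical log-likelihood ratios: how much each observed behavior shifts
# the odds that a wallet belongs to a bad actor (negative values are
# evidence of an ordinary consumer).
SIGNAL_LLR = {
    "subscription_paid_23_months": -1.2,
    "in_person_coffee_purchase":   -0.8,
    "fresh_wallet_no_history":      1.5,
    "mixer_adjacent_transfers":     2.4,
}

def illicit_probability(signals: list[str], prior: float = 0.01) -> float:
    """Naive combination of independent signals in log-odds space."""
    log_odds = math.log(prior / (1 - prior))
    log_odds += sum(SIGNAL_LLR[s] for s in signals)
    return 1 / (1 + math.exp(-log_odds))

print(illicit_probability(["subscription_paid_23_months",
                           "in_person_coffee_purchase"]))  # well below the prior
```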


Data lineage: What it is and why it’s important

Data lineage comprises the methodologies and tools that expose data's life cycle and help answer questions about who, when, where, why, and how data changes. It's a discipline within metadata management and is often a featured capability of data catalogs, allowing data consumers to understand the context of the data they use for decision-making and other business purposes. One way to explain data lineage is that it's the GPS of data, providing "turn-by-turn directions and a visual overview of the completely mapped route." Others view data lineage as a core DataGovOps practice, in which lineage, testing, and sandboxes are data governance's technical practices and automation opportunities. Capturing and understanding data lineage is important for several reasons. Compliance requirements: many organizations must implement data lineage to stay on the good side of government regulators. Data lineage in risk management and reporting is required for capital-market trading firms to support BCBS 239 and MiFID II regulations. For large banks, automating the extraction of lineage from source systems can save significant IT time and reduce risk. In pharmaceutical clinical trials, the ADaM standard requires traceability between analysis and source data.
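
To make the "GPS of data" metaphor concrete, here is a toy Python sketch of a lineage graph that traces a derived dataset back to its sources; the dataset and transform names are hypothetical:

```python
from collections import defaultdict

class LineageGraph:
    """Toy lineage store: edges point from a derived dataset back to its sources."""
    def __init__(self):
        self.upstream = defaultdict(set)

    def record(self, source: str, derived: str, transform: str) -> None:
        self.upstream[derived].add((source, transform))

    def trace(self, dataset: str, depth: int = 0) -> None:
        """Walk back to the original sources, printing each transformation hop."""
        for source, transform in sorted(self.upstream[dataset]):
            print("  " * depth + f"{dataset} <- {source}  [{transform}]")
            self.trace(source, depth + 1)

g = LineageGraph()
g.record("trades_raw", "trades_clean", "dedupe+validate")
g.record("fx_rates", "trades_clean", "currency_normalize")
g.record("trades_clean", "risk_report", "aggregate_by_desk")
g.trace("risk_report")   # turn-by-turn directions back to the source systems
```

Production tools build such graphs automatically by parsing SQL, ETL jobs, and pipeline metadata rather than relying on manual record() calls.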


7 Ways to Reduce Cyber Threats From Remote Workers

This hybrid work model comes with advantages and disadvantages, and among the disadvantages is a sharp rise in the number of cyber threats and vulnerabilities. When employees connect to organizational servers, databases, and intranets via the Internet, they are really working at a remote endpoint of the corporate office. But unlike in office-based environments, they are not as diligently protected. Therefore, CISOs need to view home-based devices as integral parts of IT and mandate that the devices, as well as the people using them, be held to the same level of security as they would be when operating from the office. As with any other maturity improvement program, organizations must grapple with the challenges posed by their people (employees, third-party vendors, and so on), processes, and technology, and implement the necessary security measures to protect them. ... To avoid breaches, employers need to implement employee training courses focused on the latest threat scenarios. Management, operations, and R&D are all prime targets of social engineering, phishing, and scamming campaigns (among other threats).


How To Remove Ransomware From Android Phone Easily?

First, you will need to restart your phone in safe mode. The exact steps vary between Android phones, so look up the method for your device. Once you use the right method, your screen will show that the phone is starting in safe mode. In safe mode, third-party apps are not running; this may or may not include the malware, depending on how it was built. With your phone in safe mode, check your installed apps by going to Settings, then Apps. In the list of apps installed on your phone, look for any you don't remember installing. When you find an app that looks suspicious, uninstall it. Depending on how you use your phone, you may have a long list of apps to go through; work through every app on the device and remove any that are suspicious or that you rarely use. After you are through with the uninstallation process, head to your phone's security settings and look for apps listed under the device administrators section. If any suspicious apps appear there, revoke their administrator rights and uninstall them as well; they may have let the malware in.


The wholesale financial services firm of the future cannot survive without AI

Compliance is the first major front. Regulatory changes coming into effect over the next year require forensic oversight of large amounts of documentation: a task that is too slow, error-prone, and expensive to complete manually. LIBOR, Basel IV, and Dodd-Frank QFC recordkeeping requirements place ever more demands on financial services companies, and many simply aren't adequately prepared. ... The second area is market risk. The volatility of markets in the past year means that transparent oversight is critical. This is where AI comes into its own. AI technology can automate the processing and analysis of the documentation that underpins much of the financial system, from loan agreements to insurance policies. Work that would previously have entailed long hours can be accelerated, allowing for vastly improved efficiency and speed and, critically, much better oversight of the compliance requirements that regulators mandate. AI gives institutions the ability to remain vigilant and to keep abreast of risks far more efficiently than ever before. With market conditions likely to remain volatile throughout 2021, fast, responsive, data-backed decisions aren't only essential for each institution; they are critical for the health of the financial system as a whole.


Fake Unemployment Benefit Websites Preying On Laid-Off Workers, Experts Warn

You may want to take several additional steps to avoid these and other scams, says Sadler, whose company uses artificial intelligence to detect patterns in legitimate and potentially fraudulent emails and to automatically block potential threats. Besides considering an email security system at home or work, Sadler said, “It’s important for people to employ two-factor authentication and to not use the same password across different sites — those are two of the best steps you can take” for better online security. He also suggests getting a password manager, such as RoboForm, 1Password, Keeper, Norton, or a similar tool that can generate your passwords, distribute them across multiple sites, and protect them with encryption software to guard against hackers. Don’t automatically trust an email asking for private information even if the email address looks legitimate, he added. “People may be trained to look out for [bizarre requests],” Sadler said, “and they may be on alert if the email address is unfamiliar. But sometimes the email account itself is compromised, and the phishing email is using a falsified IP address... If you're unsure, you can verify the legitimacy of the sender by calling the organization directly.”


AI at Your Service

From a CX and EX optimization perspective, the point of an AI system is to increase automation efficiencies. If AI can resolve an issue while communicating in a humanlike manner, operations have been optimized effectively and that particular issue doesn't need to be escalated to a live person and tap into limited resources. ... This also empowers employees to refocus on more complex, rewarding tasks that require human attention. Consider an example of how AI is used in the healthcare industry. A patient comes in with a skin problem. If it's an anomaly, the doctor may have to do more research, run a series of tests, get a second opinion, and so on. Compare that to an AI system, which can look at hundreds of thousands of cases of a similar skin condition and, in a fraction of a second, give a diagnosis that's 90% accurate. That's a genuine interactive process between a human and an AI system. In addition to reducing costs and freeing up personnel for more business-critical tasks, AI can build brand loyalty for an organization. In Formation.ai's study, Brand Loyalty 2020: The Need for Hyper-Individualization, 79% of consumers stated that the more personalized tactics a brand uses, the more loyal they are to the brand. In fact, 81% of consumers will share basic personal information in exchange for a more personalized customer experience.


What is a streaming database?

Some streaming databases are designed to dramatically reduce the size of the data to save storage costs. They can, say, replace a value collected every second with an average computed over a day; storing only the average can make long-term tracking economically feasible. Streaming also opens up some of the internals of a traditional database. Standard databases track a stream of events too, but they're usually limited to changes in data records: the sequence of INSERTs, UPDATEs, and DELETEs is normally stored in a hidden journal or ledger. In most cases, developers don't have direct access to these streams; they're only offered access to the tables that show the current values. Streaming databases open up this flow and make it simpler for developers to adjust how new data is integrated. Developers can adjust how the streams of new data are turned into tabular summaries, ensuring that the right values are computed and saved while unneeded information is discarded. The opportunity to tune this stage of the data pipeline allows streaming databases to handle markedly larger datasets.
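
As a rough illustration of the averaging idea, here is a minimal Python sketch that collapses a per-second stream into daily averages; real streaming databases do this with continuously maintained materialized views, and the class below is purely hypothetical:

```python
from collections import defaultdict
from datetime import datetime

class DailyAverager:
    """Collapse a per-second metric stream into one average per day, so
    long-term storage grows with days rather than seconds."""
    def __init__(self):
        self._sums = defaultdict(float)
        self._counts = defaultdict(int)

    def ingest(self, ts: datetime, value: float) -> None:
        day = ts.date()
        self._sums[day] += value
        self._counts[day] += 1

    def averages(self) -> dict:
        return {day: self._sums[day] / self._counts[day] for day in self._sums}

agg = DailyAverager()
agg.ingest(datetime(2021, 4, 5, 9, 0, 0), 21.5)
agg.ingest(datetime(2021, 4, 5, 9, 0, 1), 22.5)
print(agg.averages())  # {datetime.date(2021, 4, 5): 22.0}
```

The raw per-second readings can then be discarded after a retention window, which is exactly the storage trade-off the excerpt describes.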


Why Data Democratization Should Be Your Guiding Principle for 2021

Data, and universal access to it, is key for today’s companies to create new opportunities and unlock the value embedded within their organization – all of which can positively impact a company’s top and bottom line. True data democratization pushes organizations to rethink and maybe even restructure, which often means driving a dramatic cultural change in order to realize financial gain. It also means freeing information from the silos created by internal departmental data, customer data, and external data, and turning it into a borderless ecosystem of information. The trouble is many companies aren’t that good at it. Our research last year initially suggested senior decision-makers were confident that they were opening up access to data sufficiently. However, when we scratched a little deeper, we found almost half (46%) of respondents believed that data democratization wasn’t feasible for them. IT infrastructure challenges were cited by almost four out of five respondents as a blocker to democratizing data in their organization. Performance limitations, infrastructure constraints, and bottlenecks are all standing in the way.



Quote for the day:

"Don't necessarily avoid sharp edges. Occasionally they are necessary to leadership." -- Donald Rumsfeld