
Daily Tech Digest - November 16, 2024

New framework aims to keep AI safe in US critical infrastructure

According to a release issued by DHS, “this first-of-its-kind resource was developed by and for entities at each layer of the AI supply chain: cloud and compute providers, AI developers, and critical infrastructure owners and operators — as well as the civil society and public sector entities that protect and advocate for consumers.” ... Naveen Chhabra, principal analyst with Forrester, said, “while average enterprises may not directly benefit from it, this is going to be an important framework for those that are investing in AI models.” ... Asked why he thinks DHS felt the need to create the framework, Chhabra said that developments in the AI industry are “unique, in the sense that the industry is going back to the government and asking for intervention in ensuring that we, collectively, develop safe and secure AI.” ... David Brauchler, technical director at cybersecurity vendor NCC, sees the guidelines as a beginning, pointing out that frameworks like this are just a starting point for organizations, providing them with big-picture guidelines, not roadmaps. He described the DHS initiative in an email as “representing another step in the ongoing evolution of AI governance and security that we’ve seen develop over the past two years. It doesn’t revolutionize the discussion, but it aligns many of the concerns associated with AI/ML systems with their relevant stakeholders.”


Building an Augmented-Connected Workforce

An augmented workforce can work faster and more efficiently thanks to seamless access to real-time diagnostics and analytics, as well as live remote assistance, observes Peter Zornio, CTO at Emerson, an automation technology vendor serving critical industries. "An augmented-connected workforce institutionalizes best practices across the enterprise and sustains the value it delivers to operational and business performance regardless of workforce size or travel restrictions," he says in an email interview. An augmented-connected workforce can also help fill some of the gaps many manufacturers currently face, Gaus says. "There are many jobs unfilled because workers aren't attracted to manufacturing, or lack the technological skills needed to fill them," he explains. ... For enterprises that have already invested in advanced digital technologies, the path leading to an augmented-connected workforce is already underway. The next step is ensuring a holistic approach when looking at tangible ways to achieve such a workforce. "Look at the tools your organization is already using -- AI, AR, VR, and so on -- and think about how you can scale them or connect them with your human talent," Gaus says. Yet advanced technologies alone aren't enough to guarantee long-term success.


DORA and why resilience (once again) matters to the board

DORA, though, might be overlooked because of its finance-specific focus. The act has not attracted the attention of NIS2, which sets out cybersecurity standards for 15 critical sectors in the EU economy. And NIS2 came into force in October; CIOs and hard-pressed compliance teams could be forgiven for not focusing on another piece of legislation that is due in the New Year. But ignoring DORA altogether would be short-sighted. Firstly, as Rodrigo Marcos, chair of the EU Council at cybersecurity body CREST points out, DORA is a law, not a framework or best practice guidelines. Failing to comply could lead to penalties. But DORA also covers third-party risks, which includes digital supply chains. The legislation extends to any third party supplying a financial services firm, if the service they supply is critical. This will include IT and communications suppliers, including cloud and software vendors. ... And CIOs are also putting more emphasis on resilience and recovery. In some ways, we have come full circle. Disaster recovery and business continuity were once mainstays of IT operations planning but moved down the list with the move to the cloud. Cyber attacks, and especially ransomware, have pushed both resilience and recovery right back up the agenda.


Data Is Not the New Oil: It’s More Like Uranium

Comparing data to uranium is an accurate analogy. Uranium is radioactive, and it is imperative to handle it carefully to avoid radiation exposure, the effects of which are linked to serious health and safety concerns. Issues with the deployment of uranium, in reactors for instance, can lead to radioactive fallout that is expensive to contain and has long-term health consequences for impacted individuals. The possibility of uranium being stolen poses significant risks and global repercussions. Data exhibits similar characteristics. It is critical for it to be stored safely, and those who experience data theft are forced to deal with long-term consequences – identity theft and financial concerns, for example. An organization experiencing a cyberattack must deal with regulatory oversight and fines. In some cases, losing sensitive data can trigger significant global consequences. ... Maintaining a data chain of custody is paramount. Some companies allow all employees access to all records, which enlarges the attack surface, and a compromised employee could lead to a data breach. Even a single compromised employee computer can lead to a more extensive hack. Consider the case of the nonprofit healthcare network Ascension, which operates 140 hospitals and 40 senior care facilities.


Palo Alto Reports Firewalls Exploited Using an Unknown Flaw

Palo Alto said the flaw is being remotely exploited, has a "critical" severity rating of 9.3 out of 10 on the CVSS scale and that mitigating the vulnerability should be treated with the "highest" urgency. One challenge for users: no patch is yet available to fix the vulnerability. Also, no CVE code has been allocated for tracking it. "As we investigate the threat activity, we are preparing to release fixes and threat prevention signatures as early as possible," Palo Alto said. "At this time, securing access to the management interface is the best recommended action." The company said it doesn't believe its Prisma Access or Cloud NGFW are at risk from these attacks. Cybersecurity researchers confirm that real-world details surrounding the attacks and flaws remain scant. "Rapid7 threat intelligence teams have also been monitoring rumors of a possible zero-day vulnerability, but until now, those rumors have been unsubstantiated," the cybersecurity firm said in a Friday blog post. Palo Alto first warned customers on Nov. 8 that it was investigating reports of a zero-day vulnerability in the management interface for some types of firewalls and urged them to lock down the interfaces. 


Award-winning palm biometrics study promises low-cost authentication

“By harnessing high-resolution mmWave signals to extract detailed palm characteristics,” he continued, “mmPalm presents an ubiquitous, convenient and cost-efficient option to meet the growing needs for secure access in a smart, interconnected world.” The mmPalm method employs mmWave technology, which is widely used in 5G networks, to capture a person’s palm characteristics by sending signals and analyzing their reflections, thereby creating a unique palm print for each user. Beyond this, mmPalm also addresses difficulties that can arise in authentication, such as varying distance and hand orientation. The system uses a conditional generative adversarial network (cGAN) to learn different palm orientations and distances, generating virtual profiles to fill in gaps. In addition, the system adapts to different environments using a transfer-learning framework, so that mmPalm is suited to various settings. The system also builds virtual antennas to increase the spatial resolution of a commercial mmWave device. Tested with 30 participants over six months, mmPalm displayed a 99 percent accuracy rate and was resistant to impersonation, spoofing and other potential breaches.
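
The conditioning idea is the interesting part: the generator is fed both random noise and a label describing the capture conditions (for example, palm orientation and distance), so it can synthesize "virtual" profiles for conditions that were never measured. Below is a minimal, hypothetical sketch of that structure in Python/PyTorch; the feature dimensions and condition encoding are illustrative, not those of the actual mmPalm system.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Toy conditional generator: noise + capture-condition vector -> synthetic palm profile."""
    def __init__(self, noise_dim=32, cond_dim=2, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, z, cond):
        # Concatenating the condition with the noise is what makes the GAN "conditional".
        return self.net(torch.cat([z, cond], dim=1))

class ConditionalDiscriminator(nn.Module):
    """Scores whether a (profile, condition) pair looks like a real measurement."""
    def __init__(self, in_dim=128, cond_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, x, cond):
        return self.net(torch.cat([x, cond], dim=1))

# Generate a "virtual" profile for an unseen orientation/distance combination.
gen = ConditionalGenerator()
z = torch.randn(1, 32)
cond = torch.tensor([[30.0, 0.25]])  # hypothetical encoding: 30-degree tilt at 0.25 m
virtual_profile = gen(z, cond)
```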


Scaling From Simple to Complex Cache: Challenges and Solutions

To scale a cache effectively, you need to distribute data across multiple nodes through techniques like sharding or partitioning. This improves storage efficiency and ensures that each node only stores a portion of the data. ... A simple cache can often handle node failures through manual intervention or basic failover mechanisms. A larger, more complex cache requires robust fault-tolerance mechanisms. This includes data replication across multiple nodes, so if one node fails, others can take over seamlessly. It also covers more catastrophic failures, which may lead to significant downtime as the data is reloaded into memory from the persistent store, a process known as warming up the cache. ... As the cache gets larger, pure caching solutions struggle to provide linear performance in terms of latency while also allowing for the control of infrastructure costs. Many caching products were written to be fast at small scale. Pushing them beyond what they were designed for exposes inefficiencies in underlying internal processes. Potential latency issues may arise as more and more data is cached. As a consequence, cache lookup times can increase because the cache devotes more resources to managing the increased scale than to serving traffic.
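
To make the sharding point concrete, here is a minimal consistent-hashing sketch in Python; the node names are hypothetical, and real distributed caches pair a ring like this with replication so another node can serve a failed node's keys while the cache warms back up.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Maps cache keys onto nodes so that adding or removing a node
    only remaps the keys in that node's arc of the ring."""

    def __init__(self, nodes=(), replicas=100):
        self.replicas = replicas        # virtual nodes per physical node, to smooth distribution
        self._ring = []                 # sorted hash points
        self._point_to_node = {}
        for node in nodes:
            self.add_node(node)

    def _hash(self, value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add_node(self, node: str) -> None:
        for i in range(self.replicas):
            point = self._hash(f"{node}#{i}")
            bisect.insort(self._ring, point)
            self._point_to_node[point] = node

    def remove_node(self, node: str) -> None:
        for i in range(self.replicas):
            point = self._hash(f"{node}#{i}")
            self._ring.remove(point)
            del self._point_to_node[point]

    def node_for(self, key: str) -> str:
        # The first node clockwise from the key's hash owns the key.
        idx = bisect.bisect(self._ring, self._hash(key)) % len(self._ring)
        return self._point_to_node[self._ring[idx]]

ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
print(ring.node_for("user:42"))  # e.g. 'cache-b'
```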


Understanding the Modern Web and the Privacy Riddle

The main question concerns users’ willingness to surrender their data without questioning how that data is used. This could be attributed to the effect of the virtual panopticon, where users believe they are cooperating with agencies (government or private) that claim to respect their privacy in exchange for services. The Universal ID project (Aadhar project) in India, for instance, began as a means to provide identity to the poor in order to deliver social services, but has gradually expanded its scope, leading to significant function creep. Originally intended for de-duplication and preventing ‘leakages,’ it later became essential for enabling private businesses, fostering a cashless economy, and tracking digital footprints. ... In the modern web, users occupy multiple roles—as service providers, users, and visitors—while adopting multiple personas. This shift requires greater information disclosure, as users benefit from the web’s capabilities and treat their own data as currency. The unraveling of privacy has become the new norm, where withholding information is no longer an option due to the stigmatization of secrecy. Over the past few years, there has been a significant shift in how consumers and websites view privacy. Users have developed a heightened sensitivity to the use of their personal information and now recognize their basic right to internet privacy.


Databases Are a Top Target for Cybercriminals: How to Combat Them

Most ransomware—Mailto, Sodinokibi (REvil), and Ragnar Locker among them—can encrypt pages within a database and destroy them. This means the slow, unknown encryption of everything, from sensitive customer records to critical network resources, including Active Directory, DNS, and Exchange, and lifesaving patient health information. Because databases can continue to run even with corrupted pages, it can take longer to realize that they have been attacked. Most often, the wreckage of the attack is only found when the database is taken down for routine maintenance, and by that time, thousands of records could be gone. Databases are an attractive target for cybercriminals because they offer a wealth of information that can be used or sold on the dark web, potentially leading to further breaches and attacks. Industries such as healthcare, finance, logistics, education, and transportation are particularly vulnerable. The information contained in these databases is highly valuable, as it can be exploited for spamming, phishing, financial fraud, and tax fraud. Additionally, cybercriminals can sell this data for significant sums of money on dark web auctions or marketplaces.
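
That detection lag is an argument for scheduled integrity checks rather than waiting for the next maintenance window. As a hedged illustration only, the sketch below uses SQLite's built-in PRAGMA integrity_check (enterprise engines ship their own equivalents, such as DBCC CHECKDB in SQL Server); the database path is hypothetical.

```python
import sqlite3

def check_database(path: str) -> bool:
    """Run SQLite's page-level integrity check and report any corruption found."""
    conn = sqlite3.connect(path)
    try:
        rows = conn.execute("PRAGMA integrity_check;").fetchall()
    finally:
        conn.close()
    ok = rows == [("ok",)]
    if not ok:
        for (problem,) in rows:
            print(f"integrity problem: {problem}")
    return ok

if __name__ == "__main__":
    # Run from a scheduler (cron, Task Scheduler) so corruption is noticed
    # long before the next routine maintenance window.
    if not check_database("app.db"):
        raise SystemExit("corruption detected")
```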


The Impact of Cloud Transformation on IT Infrastructure

With digital transformation accelerating across industries, the IT ecosystem comprises traditional and cloud-native applications. This mixed environment demands a flexible, multi-cloud strategy to accommodate diverse application requirements and operational models. The ability to move workloads between public and private clouds has become essential, allowing companies to dynamically balance performance and cost considerations. We are committed to delivering cloud solutions supporting seamless workload migration and interoperability, empowering businesses to leverage the best of public and private clouds. ... With today’s service offerings and various tools, migrating between on-premises and cloud environments has become straightforward, enabling continuous optimization rather than one-time changes. Cloud-native applications, particularly containerization and microservices, are inherently optimized for public and private cloud setups, allowing for dynamic scaling and efficient resource use. To fully optimize, companies should adopt cloud-native principles, including automation, continuous integration, and orchestration, which streamline performance and resource efficiency. Robust tools like identity and access management (IAM), encryption, and automated security updates address security and reliability, ensuring compliance and data protection.



Quote for the day:

"The elevator to success is out of order. You’ll have to use the stairs…. One step at a time.” -- Rande Wilson

Daily Tech Digest - June 04, 2019

What the Future of Fintech Looks Like

Fintech has been driving huge changes across the financial services sector, but one area that is seeing exponential change is in the ultra-high net-worth individual (UHNWI) space. Crealogix Group, a global market leader in digital banking, has been working with banks across the world on their digital transformation journey for over 20 years, and it is only recently that they are seeing growing momentum in private wealth to digitize. Pascal Wengi, the Asia-Pacific managing director of Crealogix, says: “The old ways of servicing these clients through a personal touch is quickly moving to digitally-led platforms, with younger, tech-savvy UHNWIs wanting an immediate and comprehensive view of their assets without waiting for a phone call. At the same time, they also want customized solutions catered to their unique financial needs.” Platforms that allow access on both sides—clients, and their advisors, family office teams and accountants..., insists Wengi.



Data gravity
Data gravity is a metaphor introduced into the IT lexicon by a software engineer named Dave McCrory in a 2010 blog post. The idea is that data and applications are attracted to each other, similar to the attraction between objects that is explained by the Law of Gravity. In the current Enterprise Data Analytics context, as datasets grow larger and larger, they become harder and harder to move. So, the data stays put. It’s everything attracted to the data — applications, processing power and the like — that moves to where the data resides. Digital transformation within enterprises — including IT transformation, mobile devices and Internet of things — is creating enormous volumes of data that are all but unmanageable with conventional approaches to analytics. Typically, data analytics platforms and applications live in their own hardware + software stacks, and the data they use resides in direct-attached storage (DAS). Analytics platforms — such as Splunk, Hadoop and TensorFlow — like to own the data. So, data migration becomes a precursor to running analytics.


5 requirements for success with DataOps strategies

Organizations that operate at this speed of change require modern data architectures that allow for the quick use of ever-expanding volumes of data. These infrastructures – based on hybrid and multi-cloud for greater efficiency – provide enterprises with the agility they need to compete more effectively, improve customer satisfaction and increase operational efficiencies. When the DataOps methodology is part of these architectures, companies are empowered to support real-time data analytics and collaborative data management approaches while easing the many frustrations associated with access to analytics-ready data. DataOps is a verb, not a noun: it is something you do, not something you buy. It is a discipline that involves people, processes and enabling technology. However, as organizations shift to modern analytics and data management platforms in the cloud, you should also take a hard look at your legacy integration technology to make sure that it can support the key DataOps principles that will accelerate time to insight.



An API architect typically performs a high-level project management role within a software development team or organization. Their responsibilities can be extensive and diverse, and a good API architect must combine advanced technical skills with business knowledge and a focus on communication and collaboration. There are often simultaneous API projects, and the API architect must direct the entire portfolio. API architects are planners more than coders. They create and maintain technology roadmaps that align with business needs. For example, an API architect should establish a reference architecture for the organization's service offerings, outlining each one and describing how they work. The architect should define the API's features, as well as its expected security setup, scalability and monetization. The API architect sets best practices, standards and metrics for API use, as well. These guidelines should evolve as mistakes become clear and better options emerge.



Edge-based caching and blockchain-nodes speed up data transmission

Data caches are commonplace now, but Bluzelle claims its system, written in C++ and available on Linux and Docker containers, among other platforms, is faster than others. It further says that if its system and a more traditional cache were connected to the same MySQL database in Virginia, say, its users would get the data three to 16 times faster than on a traditional “non-edge-caching” network. Writing updates to all Bluzelle nodes around the world takes 875 milliseconds (ms), it says. The company has been concentrating its efforts on gaming, and with a test setup in Virginia, it says it was able to deliver data 33 times faster—at 22ms to Singapore—than a normal, cloud-based data cache. That traditional cache (located near the database) took 727ms in the Bluzelle-published test. In a test to Ireland, it claims 16ms over 223ms using a traditional cache. An algorithm is partly the reason for the gains, the company explains. It “allows the nodes to make decisions and take actions without the need for masternodes,” the company says. Masternodes are the server-like parts of blockchain systems.


Microsoft's Vision For Decentralized Identity

Our digital and physical lives are increasingly linked to the apps, services, and devices we use to access a rich set of experiences. This digital transformation allows us to interact with hundreds of companies and thousands of other users in ways that were previously unimaginable. But identity data has too often been exposed in breaches, affecting our social, professional, and financial lives. Microsoft believes that there’s a better way. Every person has a right to an identity that they own and control, one that securely stores elements of their digital identity and preserves privacy. This whitepaper explains how we are joining hands with a diverse community to build an open, trustworthy, interoperable, and standards-based Decentralized Identity (DID) solution for individuals and organizations. Today we use our digital identity at work, at home, and across every app, service, and device we engage with. It’s made up of everything we say, do, and experience in our lives—purchasing tickets for an event, checking into a hotel, or even ordering lunch. 


Your 3-minute guide to serverless success
What has propelled the use of serverless? Faster deployment, the simplification and automation of cloudops (also known as “no ops” and “some ops”), integration with emerging devops processes, and some cost advantages. That said, most people who want to use serverless don’t understand how to do it. Many think that you can take traditional on-premises applications and deem them serverless with the drag of a mouse. The reality is much more complex. Indeed, serverless application development is more likely a fit for net new applications. Even then you need to consider a few things, mainly that you need to design for serverless. Just as you should design for containers and other execution architectures that are optimized by specific design patterns, serverless is no exception. ... The trick to building and deploying applications on serverless systems is understanding what serverless is and how to take full advantage. We have a tendency to apply all of our application architecture experience to all types of development technologies, and that will lead to inefficient use of the technology, which won’t produce the ROI expected—or worse, negative ROI, which is becoming common.


Author Q&A: Chief Joy Officer

Change is hard. We get used to the way we work and we assume it’s just the way it has to be. Inertia is a big deal. Many of us have tried to make changes in our personal life—our health, our financial situation—only to find out we’re stuck in a rut. We know we need to change our behaviors in order to change our outcomes, but changing human behavior is hard. What probably prevents change more than anything is success. If you’re successful enough, then it’s hard to be convinced of the value of change. You’ll say, well, why should we change when we’re already successful? Of course the problem with success is that it is often fleeting. It’s not like you reach a level of success and then automatically stay there. Every organization, every market, and every business ebbs and flows. When it’s flowing awesomely, we figure we don’t need to change. But when it’s ebbing, we get scared—and sometimes that’s the least opportune time to make a change, because fear can cloud our ability to make the best decisions for our organizations or our teams.


Discover practical serverless architecture use cases


A more complete serverless architecture-based system comes into play with workloads related to video and picture analysis. In this example, serverless computing enables an as-needed workflow to spin up out of a continuous process, and the event-based trigger pulls in an AI service: images are captured and analyzed on a standard IaaS environment, with events triggering the use of Amazon Rekognition or a similar service to carry out facial recognition when needed. The New York Times used such an approach to create its facial recognition system based on public cameras around New York's Bryant Park. Software teams can also use serverless designs to aid technical security enforcement. Event logs from any device on a user's platform can create triggers that send a command into a serverless environment. The setup kicks off code that identifies the root cause of the logged event or runs a machine learning- or AI-based analysis of the situation on the device. This information, in turn, can determine what steps to take to rectify issues and protect the overall systems.
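
As a rough, hypothetical sketch of that event-triggered pattern (not the Times' actual pipeline), an AWS Lambda function subscribed to S3 object-created events can hand each newly uploaded image to Amazon Rekognition; the bucket names and keys come from the standard S3 event payload.

```python
import boto3
from urllib.parse import unquote_plus

rekognition = boto3.client("rekognition")

def handler(event, context):
    """Triggered by S3 object-created events; runs face detection on each new image."""
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])  # S3 event keys are URL-encoded
        response = rekognition.detect_faces(
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            Attributes=["DEFAULT"],
        )
        results.append({"image": key, "faces": len(response["FaceDetails"])})
    return results
```

The same shape fits the security-log example: a log event invokes a function that inspects the event and kicks off remediation, paying only for the seconds of compute each invocation uses.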


It’s time for the IoT to 'optimize for trust'

The research by cloud-based security provider Zscaler found that about 91.5 percent of transactions by internet of things devices took place over plaintext, while 8.5 percent were encrypted with SSL. That means if attackers could intercept the unencrypted traffic, they’d be able to read it and possibly alter it, then deliver it as if it had not been changed. Researchers looked through one month’s worth of enterprise traffic traversing Zscaler’s cloud seeking the digital footprints of IoT devices. It found and analyzed 56 million IoT-device transactions over that time, and identified the type of devices, protocols they used, the servers they communicated with, how often communication went in and out and general IoT traffic patterns. The team tried to find out which devices generate the most traffic and the threats they face. It discovered that 1,015 organizations had at least one IoT device. The most common devices were set-top boxes (52 percent), then smart TVs (17 percent), wearables (8 percent), data-collection terminals (8 percent), printers (7 percent), IP cameras and phones (5 percent) and medical devices (1 percent).



Quote for the day:


"The ability to continuously simplify, while adding more value and removing clutter, is a superpower." -- @ValaAfshar


July 24, 2012

ASP.NET - Password Strength Indicator using jQuery and XML
An ASP.NET password strength indicator, somewhat similar in behavior to the AJAX PasswordStrength extender control, implemented using jQuery and XML.

Flexibility: A Foundation for Responsive Design
If you haven’t been living under a rock for the past year or so, you know that responsive Web design is one of the biggest trends these days. Introduced by Ethan Marcotte, the concept is simple: develop a site using methods that enable it to adapt and respond to different devices and resolutions.

Why does the IT industry continue to listen to Gartner?
Another day, another provocative research report from Gartner, which has a long track record of spectacularly wrong predictions. I've collected some of their greatest hits. Er, misses.

After Infy, TCS, Cognizant in fray to buy Lodestone
TCS and Cognizant have joined Infosys in the race to take over Swiss firm Lodestone Management Consultants, a management and technology consulting firm.

Samsung adopts Windows Azure for Smart TV cloud structures
Samsung announced on Monday its decision to use Windows Azure technology to manage the Smart TV system through cloud-based technology. The company cited a reduction in costs, increased productivity and a flexible, scalable model which can be expanded to meet its growing customer base.

Harley-Davidson deal win spurs Infosys to open new US delivery centre
Outsourcer Infosys has decided to open a delivery centre in Milwaukee, after it won a five-year deal with motorbike maker Harley-Davidson to supply tech services such as applications management

Facebook's Zuckerberg wins privacy patent, 6 years on
The patent, number 8,225,376, was first applied for on July 25, 2006. Zuckerberg and Facebook's former chief privacy officer Chris Kelly are credited as inventors for the patent, which is titled "Dynamically generating a privacy summary."

Microsoft's Lync: Unified Communications Made Easy
Microsoft Lync offers Instant Messaging, Audio/Video Conferencing and Telephony Services, making it the complete unified communications tool SMEs need, says Microsoft's Sukhvinder Ahuja.

How to Handle Relational Data in a Distributed Cache
Although distributed caching is great, one challenge it presents is how to cache relational data that has various relationships between data elements. This is because a distributed cache provides you a simple, Hashtable-like (key, value) interface ...
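
One common workaround, sketched below in Python against a stand-in (key, value) client (the names and key scheme are illustrative, not any particular product's API), is to cache each entity under its own key and store the relationship itself as a list of child keys, so related items can be read or evicted together.

```python
import json

class KeyValueCache:
    """Stand-in for a distributed cache client exposing a simple (key, value) interface."""
    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = json.dumps(value)

    def get(self, key):
        raw = self._store.get(key)
        return json.loads(raw) if raw is not None else None

cache = KeyValueCache()

# Cache each entity under its own key, and the one-to-many relationship
# as a list of child keys.
cache.set("customer:1001", {"id": 1001, "name": "Acme Corp"})
cache.set("order:5001", {"id": 5001, "customer_id": 1001, "total": 250.0})
cache.set("order:5002", {"id": 5002, "customer_id": 1001, "total": 80.0})
cache.set("customer:1001:orders", ["order:5001", "order:5002"])

def get_customer_with_orders(customer_id):
    customer = cache.get(f"customer:{customer_id}")
    order_keys = cache.get(f"customer:{customer_id}:orders") or []
    return customer, [cache.get(k) for k in order_keys]
```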

Robert's Rules: The Four Commitments
Here are Robert Thompson's four leadership traits, or rather, commitments.


Quote for the day:

"Out of clutter, find simplicity. From discord find harmony. In the middle of difficulty lies opportunity."  -- Albert Einstein