
Daily Tech Digest - January 21, 2024

What is RAG? More accurate and reliable LLMs

Retrieval-Augmented Generation (RAG) is an AI framework that is reshaping the field of Natural Language Processing (NLP). It is designed to improve the accuracy and richness of content produced by language models. Here’s a synthesis of the key points regarding RAG from various sources. RAG retrieves facts from an external knowledge base to provide grounding for large language models (LLMs). This grounding ensures that the information the LLMs generate is based on accurate and current data, which is particularly important given that LLMs can sometimes produce inconsistent outputs. The framework operates as a hybrid model, integrating both retrieval and generative models; this integration allows RAG to produce text that is not only contextually accurate but also rich in information, since its ability to draw from extensive databases lets it contribute contextually relevant, detailed content to the generative process. Finally, RAG addresses a key limitation of foundational language models, which are generally trained offline on broad domain corpora and are not updated with new information after training.
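The retrieve-then-generate loop described above can be sketched in a few lines of Python. This is an illustrative toy, not a production RAG stack: the helper names are hypothetical, naive word overlap stands in for embedding-based similarity search, and the final LLM call is omitted.

```python
def retrieve(query, knowledge_base, k=2):
    """Rank documents by word overlap with the query; a crude stand-in
    for vector-embedding similarity search."""
    query_terms = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query, docs):
    """Prepend retrieved facts so the model answers from current data
    rather than stale training memory."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only these facts:\n{context}\n\nQuestion: {query}"

kb = [
    "The Q3 revenue report was published in October 2023.",
    "RAG grounds model output in retrieved documents.",
    "Employee handbook: remote work requires manager approval.",
]
query = "What grounds RAG output?"
prompt = build_grounded_prompt(query, retrieve(query, kb))
# `prompt` would then be sent to the LLM of your choice.
```

In a real system the knowledge base would be chunked, embedded, and indexed, but the grounding principle is the same: the generator only ever sees the question together with retrieved evidence.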


Redefining Quantum Bits: The Graphene Valley Breakthrough

Because quantum information is much more prone than its classical counterpart to being corrupted by the surrounding environment – and thereby becoming unsuitable for computational tasks – researchers who study different qubit candidates must characterize their coherence properties: these tell them how well and for how long quantum information can survive in their qubit system. In most traditional quantum dots, electron spin decoherence can be caused by the spin-orbit interaction, which introduces an unwanted coupling between the electron spin and the vibrations of the host lattice, and by the hyperfine interaction between the electron spin and the surrounding nuclear spins. In graphene, as in other carbon-based materials, spin-orbit coupling and hyperfine interaction are both weak: this makes graphene quantum dots especially appealing for spin qubits. The results reported by Garreis, Tong, and co-authors add one more promising facet to the picture. ... The hexagonal symmetry observed in this so-called real space is also present in momentum space, where the vertices of the lattice correspond not to the spatial locations of carbon atoms but to values of momentum associated with the free electrons on the lattice.


5 Ways AI Can Make Your Human-To-Human Relationships More Effective

Understanding your audience is a major challenge for many business leaders. After all, if you knew what did or didn’t appeal to your audience, it would be much easier to speak to them in a meaningful, engaging way that sparks lasting connections. And AI can help here, too. This was illustrated to me during a recent conversation with James Webb, co-founder and CTO of Comb Insights, whose app uses proprietary AI to provide sentiment scores on comments on social media posts. "Using AI to quickly evaluate the overall sentiment of the comments on a post can give business leaders an immediate understanding of whether their content resonated with their audience,” he told me in an interview. “Seeing the ratio of positive to neutral or negative comments, and seeing the most common words that show up in the comments, can provide quick insights into why a post succeeded or failed. With this instant understanding of their audience, businesses can pivot in the type of social media content they produce so they can strengthen these important digital relationships.”


The missing link of the AI safety conversation

From a practical standpoint, the high cost of AI development means that companies are more likely to rely on a single model to build their product — but product outages or governance failures can then have cascading impacts. What happens if the model you’ve built your company on no longer exists or has been degraded? Thankfully, OpenAI continues to exist today, but consider how many companies would be out of luck if OpenAI lost its employees and could no longer maintain its stack. Another risk is relying heavily on systems that are inherently probabilistic. We are not used to this; the world we live in has so far been engineered and designed to function with definitive answers. Even if OpenAI continues to thrive, its models are fluid in terms of output, and the company constantly tweaks them, which means the code you have written to support them and the results your customers rely on can change without your knowledge or control. Centralization also creates safety issues. These companies act in their own best interest. If there is a safety or risk concern with a model, you have much less control over fixing that issue and less access to alternatives.


Intro to Digital Fingerprints

Digital fingerprinting is a technique used to identify users across different websites based on their unique device and browser characteristics. These characteristics - fingerprint parameters - can include software and hardware attributes (CPU, RAM, GPU, media devices such as cameras, mics and speakers), location, time zone, IP address, screen size/resolution, browser/OS languages, network and internet-provider-related attributes, and more. The combination of these parameters creates a unique identifier - a fingerprint - that can be used to track a user's online activity. Fingerprints play a crucial role in online security, enabling services to identify and authenticate unique users. They also make it possible for users to trick such systems and stay anonymous online. If you can manipulate your fingerprints, you can run tens, hundreds or more different accounts while pretending they are unique, authentic users. While this may sound cool, it has serious implications, as it makes it possible to create an army of bots that can spread spam and fakes all over the internet, potentially resulting in fraudulent actions.
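As a rough illustration of how such parameters combine into one identifier, the sketch below hashes a sorted set of device attributes. The attribute names are hypothetical, and real fingerprinting libraries use many more signals (canvas rendering, installed fonts, audio stack), but it shows why changing even a single attribute yields a different identity.

```python
import hashlib

def device_fingerprint(attributes):
    """Combine browser/device attributes into a stable identifier.
    Sorting the keys makes the hash deterministic regardless of the
    order in which attributes were collected."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device_a = {"os": "Windows 11", "screen": "1920x1080", "tz": "UTC+2", "lang": "en-US"}
device_b = dict(device_a, tz="UTC-5")  # one changed attribute -> new identity

fp_a = device_fingerprint(device_a)
fp_b = device_fingerprint(device_b)
```

This also shows why fingerprint manipulation works in both directions: spoofing one attribute is enough to appear as a "new" user, while keeping all attributes stable lets a tracker recognize you across sites without cookies.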


Looking at a data-driven financial future for India

In the intricate landscape of financial services, managing vast data, complex silos, and strict compliance demands a strategic solution. A hybrid data mesh is an innovative approach to financial operations that brings flexibility and coherence. This method combines a distributed architecture with a single source of truth (SSOT), ensuring accurate, secure, and compliant data handling. Data distribution across systems and functions facilitates quick insights while adhering to quality and privacy standards. The hybrid data mesh concept integrates the advantages of a distributed architecture tailored to domain-specific data with the SSOT, providing enhanced flexibility and scalability. This fusion ensures data coherence and accuracy while allowing domain independence, reinforcing security, and streamlining traceable and auditable compliance. Predictive models can be tailored to specific products or customer segments by harnessing AI and ML tools, enhancing decision-making in a dynamic market. This streamlined approach identifies growth opportunities and nurtures a culture of adaptability and innovation.
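One way to picture how domain independence and an SSOT coexist is the sketch below. All names and data here are hypothetical: domains own their own data products, while an SSOT registry records which domain is authoritative for each entity, so reads route to one consistent answer.

```python
# Hypothetical domain-owned data products. The "lending" domain holds a
# stale replica; only the registry decides whose copy is authoritative.
DOMAIN_PRODUCTS = {
    "payments": {"customer_balance": {"cust-42": 1250.0}},
    "lending":  {"customer_balance": {"cust-42": 1195.0}},  # stale copy
}

# Single source of truth: which domain is authoritative per entity.
SSOT_REGISTRY = {"customer_balance": "payments"}

def resolve(entity, key):
    """Route reads through the SSOT registry so every consumer sees
    the authoritative value, while domains retain independent ownership
    of their products."""
    owner = SSOT_REGISTRY[entity]
    return DOMAIN_PRODUCTS[owner][entity][key]

balance = resolve("customer_balance", "cust-42")
```

In a real deployment the registry would be a governed data catalog and the lookups federated queries, but the coherence guarantee is the same: domain teams evolve their data independently, and the registry keeps consumers on the authoritative source.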


L&D trends that will define 2024

AI-assisted coding/software development employs AI to help write and review code. The technology's potential to help new developers improve their code and save time is valuable. The edtech sector, in particular, will employ AI to create customised learning experiences, besides using tools that offer instant feedback on code. We could be looking at automated assessments for unbiased, error-free evaluations. Manually identifying personalised learning journeys for numerous individuals is time-consuming and extremely difficult. AI-assisted coding can help solve this operational challenge. Soon, we'll give users quick, accurate responses and allow them to accelerate their learning journeys. ... Organisations will focus on data-driven, business-aligned learning initiatives for specific job-role competencies. This is to quantify L&D impact by easily tracking employee metrics such as job performance, efficiency, engagement, and employee satisfaction in new ways. When properly implemented, the accumulated data can raise confidence levels among higher-ups and lead to sustained investment in training practices. Organisations can also analyse the information to identify areas of positive impact and focus L&D efforts in those areas for consistently better outcomes.


New Guidance Urges US Water Sector to Boost Cyber Resilience

"Cyber threat actors are aware of - and deliberately target - single points of failure," the guidance states. "A compromise or failure of a water and wastewater sector organization could cause cascading impacts throughout the sector and other critical infrastructure sectors." The incident response guide aims to provide organizations with best practices for all four stages of the incident response life cycle - from preparation through detection, recovery and post-incident activities. The guidance says "the cyber incident reporting landscape is constantly evolving" and encourages water sector officials to review their reporting obligations and "consider engaging in additional voluntary reporting and/or information sharing" measures. Eric Goldstein, CISA's executive director for cybersecurity, said in a statement announcing the joint guidance that the U.S. water and wastewater sector "is under constant threat from malicious cyber actors." "In the new year, CISA will continue to focus on taking every action possible to support 'target-rich, cyber-poor' entities like WWS utilities by providing actionable resources and encouraging all organizations to report cyber incidents," he said.


Banking at the Precipice: Navigating the Fifth Industrial Revolution

As retail banking stands amid the Fourth Industrial Revolution’s digital transformation, leaders now must prepare for an imminent Fifth Industrial Revolution poised to profoundly reshape markets and experiences. Defined by extreme personalization, mass customization and precision augmentation, the emerging revolution’s exact disruptions remain somewhat undefined. Yet advancements in generative artificial intelligence, ambient interfaces and hyper-connectivity hint at consumer-in-command days ahead. ... Most of these Fifth Industrial Revolution financial applications may seem unimaginable today. Imagine augmented live views layering physical surfaces like a retail store, billboard or car dealership with tailored offers based on persona identification and real-time transactional and behavioral data. Moving further, imagine a ‘digital twin agent’ seamlessly negotiating a personalized deal or pre-approved financing instantly. In this world, augmented and mixed reality interfaces, bridging physical and virtual worlds, will be able to move money experiences from transactions to value-based propositions, based on where your eyes focus and on your past engagements.


How generative AI is changing entrepreneurship

Entrepreneurs are expected to do a wide range of time-consuming tasks, from writing emails and answering phone calls to orchestrating product demonstrations and coding a website. “AI does all of those things well,” Mollick said. “It lets you focus more on what your top skill is, and it kind of handles everything else.” Generative AI can also serve as a guide. “A third of Americans have a business idea that they haven’t acted on because they don’t know what to do next,” Mollick said. “The AI can tell you what to do next, help you write the emails, [and] help you build the product.” Mollick noted that users should be aware of the benefits and limitations of the technology. “It’s kind of like an intern who wants to make you happy and therefore lies a lot and is kind of naive [and] never admits that they made a mistake,” he said. “Once you think about [AI] that way, you end up in much better shape.” Generative AI is a new general-purpose technology — one that comes around once in a generation and touches just about everything humans do, Mollick said, like electricity, computers, and the internet have. For entrepreneurs, generative AI can assist with researching ideas, coming up with logos and names, creating a website, and more, Mollick said.



Quote for the day:

"Leadership is not about titles, positions, or flow charts. It is about one life influencing another." -- John C. Maxwell

Daily Tech Digest - May 05, 2023

Data is choking AI. Here’s how to break free.

As enterprises deepen their embrace of AI and other data-driven, high-performance computing, it’s critical to ensure that performance and value are not starved by underperforming processing, storage and networking. Here are key considerations to keep in mind. Compute. When developing and deploying AI, it’s crucial to look at computational requirements for the entire data lifecycle: starting with data prep and processing (getting the data ready for AI training), then during AI model building, training, and inference. Selecting the right compute infrastructure (or platform) for the end-to-end lifecycle and optimizing for performance has a direct impact on the TCO and hence ROI for AI projects. End-to-end data science workflows on GPUs can be up to 50x faster than on CPUs. To keep GPUs busy, data must be moved into processor memory as quickly as possible. Depending on the workload, optimizing an application to run on a GPU, with I/O accelerated in and out of memory, helps achieve top speeds and maximize processor utilization.
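The "keep GPUs busy" advice above boils down to overlapping data preparation with compute. Below is a minimal host-side sketch of that staging pattern, using a bounded queue and a background thread in place of pinned-memory buffers and asynchronous device copies; all names are hypothetical and the "compute" stage is a stand-in for GPU work.

```python
import queue
import threading

def prefetching_pipeline(batches, preprocess, compute, depth=4):
    """Overlap data preparation with compute using a bounded queue, so
    the compute stage is not left idle waiting on I/O. The queue depth
    bounds how far the loader may run ahead of the consumer."""
    staged = queue.Queue(maxsize=depth)
    SENTINEL = object()

    def producer():
        for b in batches:
            staged.put(preprocess(b))   # runs concurrently with compute
        staged.put(SENTINEL)            # signal end of stream

    threading.Thread(target=producer, daemon=True).start()

    results = []
    while (item := staged.get()) is not SENTINEL:
        results.append(compute(item))
    return results

# Toy stand-ins: "preprocess" decodes/transforms a batch, "compute" is
# the accelerator step that should never starve.
out = prefetching_pipeline(range(8),
                           preprocess=lambda b: b * 2,
                           compute=lambda x: x + 1)
```

The same shape appears in real data-loading stacks (e.g. multi-worker loaders feeding a device queue): the knob that matters is the prefetch depth, which trades host memory for tolerance to jitter in either stage.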


New leadership for a new era of thriving organizations

Leading companies today seek to become learning organizations that are continually evolving, exploring, ideating, experimenting, scaling up, executing, scaling down, and exiting across many different activities in parallel. By accelerating change and allowing for positive surprises and innovations to flourish, they consistently outperform those companies that focus instead on always trying to deliver the “perfect” plan. We are in the midst of a profound shift in how work gets done, one that asks leaders to go beyond being controllers with a mindset of certainty to becoming coaches who operate with a mindset of discovery and foster continual rapid exploration, execution, and learning. Leaders and leadership teams can learn how to set and work toward outcomes rather than traditional key performance indicators; to foster rapid experimentation and learn from both successes and setbacks; and to manage risk differently, through testing, learning, and fast adaptation. The leadership practices enabling this shift include the following: operating in short cycles of decision, action, and learning.


The Fourth Industrial Revolution is here. Here’s what it means for the way we work

Herein lies the double-edged sword of the Fourth Industrial Revolution. Although smart machines and artificial intelligence are predicted to bring unimaginable efficiencies, they will do so by increasingly replacing a wide swath of existing human jobs. While jobs for human beings have historically endured through technological revolutions, we have never had a technological revolution capable of displacing so many human beings and so much human brain power as the one we are transitioning through now. According to a report from Oxford Economics, a global forecasting and quantitative analysis firm, smart machines are expected to displace about 20 million manufacturing jobs across the world over the next decade, including more than 1.5 million in the U.S. Other studies predict that smart machines, robotics, artificial intelligence, blockchain technology, 3D printing, and automation will put 20% to 40% of existing jobs at risk over the next decades. And a report from the Brookings Institution finds that 25% of U.S. workers will face “high exposure” and risk being displaced over the upcoming few decades.


Even Amazon can't make sense of serverless or microservices

Beyond celebrating their good sense, I think there's a bigger point here that applies to our entire industry. Here's the telling bit: "We designed our initial solution as a distributed system using serverless components... In theory, this would allow us to scale each service component independently. However, the way we used some components caused us to hit a hard scaling limit at around 5% of the expected load." That really sums up so much of the microservices craze that was tearing through the tech industry for a while: IN THEORY. Now the real-world results of all this theory are finally in, and it's clear that in practice, microservices pose perhaps the biggest siren song for needlessly complicating your system. And serverless only makes it worse. What makes this story unique is that Amazon was the original poster child for service-oriented architectures: the far more reasonable precursor to microservices, an organizational pattern for dealing with intra-company communication at crazy scale, where API calls beat scheduling coordination meetings. SOA makes perfect sense at the scale of Amazon.


The impact of ChatGPT on multi-factor authentication

As adoption of AI/ML-backed tools continues to grow, it will be important to focus on key ways to mitigate the risks associated with their use. When the efficacy of identity measures that companies have trusted for decades such as voice verification and video verification erodes, strongly linked electronic identity is even more important. Phishing-resistant credential solutions such as security keys — that are hardware-backed and purpose-built around cryptographic principles — excel in these scenarios. Security keys that support FIDO2 also ensure that these credentials are tied to a specific relying party. This binding prevents attackers from preying on simple human error. With security keys, credentials are securely stored in hardware which prevents those credentials from being transferred to another system without the user’s knowledge or by accident. The use of FIDO2 authenticators also greatly reduces the efficacy of social engineering through phishing as users cannot be tricked into vending a one-time password to an attacker, or have SMS authentication codes stolen directly through a SIM swapping attack.
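The origin binding that makes FIDO2 phishing-resistant can be illustrated with a toy model. The sketch below substitutes an HMAC for the real public-key signature and uses made-up names, but it preserves the key idea from the passage: the relying party's identity is mixed into every assertion, so a credential registered for one site cannot answer a challenge on behalf of a look-alike domain.

```python
import hashlib
import hmac

def sign_assertion(credential_key, rp_id, challenge):
    """The authenticator binds the relying party ID into the signed
    material, so the same credential produces incompatible assertions
    for different origins. (Real FIDO2 uses asymmetric signatures.)"""
    rp_hash = hashlib.sha256(rp_id.encode()).digest()
    return hmac.new(credential_key, rp_hash + challenge, hashlib.sha256).hexdigest()

def verify(credential_key, expected_rp, challenge, assertion):
    """The server only accepts assertions computed over ITS identity."""
    expected = sign_assertion(credential_key, expected_rp, challenge)
    return hmac.compare_digest(expected, assertion)

key, challenge = b"device-secret", b"server-nonce-123"

# Legitimate login: the assertion was produced for the real origin.
legit = verify(key, "bank.example", challenge,
               sign_assertion(key, "bank.example", challenge))

# Phishing attempt: a look-alike origin yields an assertion the real
# server rejects, even though the user "authenticated" successfully.
phished = verify(key, "bank.example", challenge,
                 sign_assertion(key, "bank-login.evil", challenge))
```

Contrast this with a one-time password: the OTP is just a string the user can be tricked into typing anywhere, whereas the bound assertion is useless outside the origin it was minted for.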


Three Powerful Tactics Entrepreneurs Use For Instant Confidence

Tried and tested by entrepreneurs who have faced nerves and self-doubt, reminding yourself of what you have already achieved can give your confidence levels the boost they need. Create a metaphorical cookie jar of all your business and life wins and dip in for instant assurance. Samantha from ICI CARE keeps a list of her past wins and her big picture vision on the wall where she works, ensuring they are at eye level. "By having that reminder, I win over my brain before it spirals down,” she said. “Self-doubt is normal but I keep my focus and energy on achievement.” ... Confidence is a state of mind, which means it’s also a choice. Dr Amanda Foo-Ryland, founder of Your Life Live It, knows this well, explaining that it’s also, “about how you choose to see a new situation.” She knows, “I can either be confident or choose not to be.” Like Sarceno, she incorporates visualisation into the way ahead. “If I choose to be confident, I imagine the event and see myself in it being confident, being the person I want to be. I observe myself in the movie in my head.” 


White House unveils AI rules to address safety and privacy

This new effort builds on previous attempts by the Biden administration to promote some form of responsible innovation, but to date Congress has not advanced any laws that would rein in AI. In October, the administration unveiled a blueprint for a so-called “AI Bill of Rights” as well as an AI Risk Management Framework; more recently, it has pushed for a roadmap for standing up a National AI Research Resource. The measures don’t have any legal teeth; they are just more guidance, studies and research "and they’re not what we need now," according to Avivah Litan, a vice president and distinguished analyst at Gartner Research. “We need clear guidelines on development of safe, fair and responsible AI from the US regulators,” she said. “We need meaningful regulations such as we see being developed in the EU with the AI Act. ... US regulators need to step up their game and pace." In March, Senate Majority Leader Chuck Schumer, D-NY, announced plans for rules around generative AI as ChatGPT surged in popularity. Schumer called for increased transparency and accountability involving AI technologies.


Court Dismisses FTC Complaint Against Data Broker Kochava

The FTC in its lawsuit filed last August against Idaho-based Kochava said the company invades consumers' privacy by selling advertisers geolocation data sets of mobile phone holders tied to a unique ID. That information could be used to identify individuals who have visited abortion clinics, mental health providers and other sensitive locations, the agency said. Kochava filed its own lawsuit in the same Idaho federal court weeks before the FTC's action, as a bid to preemptively counter the federal agency. The company also filed a motion last October to dismiss the FTC's lawsuit. Winmill wrote in his Thursday ruling that nothing prevents the FTC from asserting that an invasion of privacy by itself can constitute a legitimate cause for suing. The agency failed, he said, by not establishing that Kochava's business practices constitute substantial injury to consumers. "The privacy concerns raised by the FTC are certainly legitimate. Disclosing where a person has been every fifteen minutes over a seven-day period could undoubtedly reveal information that the person would consider private, such as their travel habits, medical conditions, and social or religious affiliations," he wrote.


The Merck appeal: cyber insurance and the definition of war

The war exclusion was found to be not applicable, and the court used the insurer’s own words to detail the “why” behind the denial. When read by a layman such as me, it appears the judges believed the insurers had ample time to adjust their policy dynamics and didn’t get around to it. ... That said, when a nation’s intelligence entities run covert operations, which Russia does on a regular basis, the goal of the government at hand is always to maintain plausible deniability of any illegal acts. Could the NotPetya attack have been sponsored by the Russian Federation? Absolutely, and indeed, Kroll Cyber Security, the cyber consultant for the insurers, opined before the court “with high confidence” that the attack was “orchestrated by actors working for or on behalf of the Russian Federation.” Yet, one should note that when the US Department of Justice had the opportunity to pin the tail on that same donkey, they demurred. Thus, if a national government is not going to attribute nation-state sponsorship to an attack, then it will be most difficult for an insurance entity to successfully do so within the courts without explicit verbiage in the cybersecurity exclusions.


How the influence of data and the metaverse will revolutionize businesses and industries

From machine and building performance to energy and emissions, data is the crucial link between the physical and digital worlds. It’s also the key to solving efficiency and sustainability challenges that are now more urgent than ever. If the metaverse is meant to transform business and industries, it must be built on solid data foundations. ... Digital transformation started with connecting physical assets via IoT and edge controls. Its disruptive potential has proven to carry operational and energy efficiency across all levels of an enterprise. When we introduce powerful software capabilities and start leveraging the generated data, we can create virtual representations of the real world by combining simulation, augmented reality (AR), data sharing, and visualization all at once. ... It seems that all these and many more possible applications have something in common: they are all about bringing together technologies to address challenges of the physical world, by giving real people the means to learn, collaborate, act, and essentially create value through a virtual, digitally augmented space.



Quote for the day:

"You always believe in other people. But that's easy. Sooner or later you have to believe in yourself." -- Gary, The Muppets

Daily Tech Digest - January 09, 2022

Observability: How AI will enhance the world of monitoring and management

Observability is based on Control Theory, according to Richard Whitehead, the chief evangelist at observability platform developer, Moogsoft. The idea is that with enough quality data at their disposal, AI-empowered technicians can observe how one system reacts to another, or at the very least, infer the state of a system based on its inputs and outputs. The problem is that observability is viewed in different contexts between, say, DevOps and IT. While IT has worked fairly well by linking application performance monitoring (APM) with infrastructure performance monitoring (IPM), emerging DevOps models, with their rapid change rates, are chafing under the slow pace of data ingestion. By unleashing AI on granular data feeds, however, both IT and DevOps will be able to quickly discern the hidden patterns that characterize quickly evolving data environments. This means observability is one of the central functions in emerging AIOps and MLOps platforms that promise to push data systems and applications management into hyperdrive. 
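The "hidden patterns in granular data feeds" idea can be made concrete with a toy detector. The sketch below flags metric points that deviate sharply from a rolling baseline; the function, window size, and threshold are illustrative assumptions, and real AIOps platforms apply far richer models across many correlated feeds.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(series, window=20, threshold=3.0):
    """Flag indices whose value deviates from the rolling baseline by
    more than `threshold` standard deviations -- a toy version of the
    pattern detection AIOps platforms run on granular metric feeds."""
    recent = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(series):
        if len(recent) >= 5:  # require a small warm-up before judging
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append(i)
        recent.append(value)
    return anomalies

# Steady request latencies with one spike at index 8.
latencies = [100, 102, 99, 101, 100, 98, 101, 100, 450, 99, 100]
spikes = detect_anomalies(latencies)  # -> [8]
```

The point of putting AI on top of such feeds is precisely that a static threshold like this one breaks down when baselines drift or patterns span multiple signals; learned models adapt the baseline instead of hard-coding it.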


The Fourth Industrial Revolution will be people powered

While there is a common perception that digitization and automation are a threat to the world’s workers, companies at the forefront of the technology frontier have actually created jobs—different, new roles that are much more high tech than the roles of the past. And with the current labor mismatch being felt in many countries, the time is now to further engage workers for a digitally enabled future. ... This focus is backed by growing research proving that workforce engagement is key. Over the last several years, research with the World Economic Forum, in collaboration with McKinsey, has surveyed thousands of manufacturing sites on their way to digitizing operations and identified about 90 leaders. These are the lighthouses—sites and supply chains chosen by an independent panel of experts for leadership in creating dramatic improvements with technology. Together they create the Global Lighthouse Network, committed to sharing what they’ve learned along the way.


FarmSense uses sensors and machine learning to bug-proof crops

The impact of this technology is clear. For farmers tending to fields large and small, real-time information on insects would not only be important for their financial security, but would also allow them to potentially conserve and protect critical resources, such as soil health. FarmSense says it wants to empower rural farmers, who it says are disproportionately impacted by the damages caused by insects. Yet $300 per sensor per season is stiff, posing a potential risk to adoption and, thus, to the tech’s ability to even solve the issue of insect damage in the first place. One of the most difficult things for small-scale farmers is managing risk, said Michael Carter, the director of the USDA-funded Feed the Future Innovation Lab for Markets, Risk, and Resilience and distinguished professor of agricultural and resource economics at UC Davis. “Risk can keep people poor. It disincentivizes investment in technologies that would raise income on average, because the future is unknown,” Carter said. “People with low wealth obviously don’t have a lot of savings, but they can’t risk the savings to invest in something that might improve their income that also might cause their family to starve.”


Why the road to stakeholder capitalism begins with diverse boards

While the pandemic cast the notion of stakeholder capitalism in sharp relief, part of this conversation points to historic shifts from tangible to intangible assets. In the past, markets and investors measured company value using conventional financial yardsticks developed for asset-intensive businesses. This approach, however, no longer captures the full value picture – both in terms of risk and opportunity – because today it is often a company’s intangibles that are the real drivers of value. Consider certain tech and software companies: what are their significant hard assets? The growing acceptance of ESG across the financial sector shows that investors are beginning to recognize the intangibles upon which we all depend. This newfound acknowledgment and pricing in of integral things that matter – including natural, human, and social capital – points to the interconnectedness of stakeholders and the need for boards to reflect that broader view. ... This change in mindset is not insignificant. 


Hackers Have Been Sending Malware-Filled USB Sticks to U.S. Companies Disguised as Presents

While it might seem ridiculous that anyone would plug a random USB stick into their computer, studies have shown that, actually, that’s exactly what a whole lot of people do when confronted with the opportunity. Thus the popularity of the “drop” trick, in which a malicious drive is left in a company’s parking lot in the hopes that the weakest link at the firm will pick it up and, out of curiosity, plug it into their laptop. Actually, if you believe one high-ranking defense official, a disastrous, worm-fueled attack on the Pentagon in 2008 was launched just this way. Hackers have also attempted to use USBs as a vector for ransomware attacks before. Last September, it was reported that gangs had been approaching employees of particular companies and attempting to bribe them into unleashing ransomware on their company’s servers via sticks secured by the hackers. All of this is a roundabout way of saying a few basic things: Don’t accept gifts from strangers, avoid bribes, and, if you don’t know where that USB stick came from, better leave it alone.


GM says Qualcomm’s computer chips will power its next-gen ‘hands-free’ driving mode

GM first announced Ultra Cruise during an investor event last year, describing it as a massive leap over the company’s Super Cruise system, which allows for hands-free driving on mapped, divided highways. In contrast, Ultra Cruise will cover “95 percent” of driving scenarios on 2 million miles of roads in the US, the company claimed. “We’re attempting to have this feature be sort of a door-to-door driverless operation,” said Jason Ditman, chief engineer at GM, in an interview with The Verge. “When the vehicle gets onto a capable road, Ultra Cruise will automatically engage and handle the majority of the work, hands-free. Stop signs, stoplights, turns, splits, merges, freeways, subdivision... all of those domains.” That’s thanks to Qualcomm’s new high-powered processors, Ditman said. Last year, Qualcomm entered into an agreement with GM to provide computer chips for the automaker’s next generation of electric vehicles. When it comes out in 2023, the Cadillac Celestiq will be one of the first vehicles to feature the chipmaker’s new ADAS platform, which includes Qualcomm’s Snapdragon SA8540P system-on-a-chip and SA9000P artificial intelligence accelerator.


France opens access to quantum computing to researchers, start-ups

The aim is to make this technology accessible to as many people as possible, including the scientific community and French and EU start-ups. The intent is to ensure France does not miss out on the major advances quantum computing could make in the decades to come. The platform will be installed at the High-Performance Computing Centre at the French Atomic Energy Commission (CEA). “By mid-2022, we will open a procedure […] for the purchase of two to three quantum hardware machines that are integrated into the platform,” said O, adding that two other calls for tender are planned over the next three years. The platform has a total budget of €170 million and is part of the €1.8 billion national quantum strategy, inaugurated on 21 January 2021 by President Emmanuel Macron, who is keen to make this technology a major issue for France’s sovereignty, strategic superiority and independence. According to Paris, France wants to become one of the world’s leading powers in the field, but the intention is not to sideline its European neighbours.


The real value of 5G and cloud computing

At the essence of cloud and 5G is that we can leverage cloud-based resources mixed with enterprise resources. Let’s face facts: We’re heading to an enterprise IT future of multicloud meets hybrid cloud meets edge computing meets complex and dynamic applications that run anywhere and everywhere. Thus, the future becomes less about the cloud and more about new and emerging ways to leverage all technology, cloud or not, that will be widely distributed and complex. The automation of 5G allows for the orchestration of systems in different network domains, be it your phone, desktop, TV, an enterprise server in a datacenter, or a public cloud provider. We’re quickly moving to not caring about where something runs but needing that something to migrate automatically to optimize how it runs and scales, cloud or no cloud. Also, this won’t happen unless we automate security and network provisioning as well. The bottom line is that 5G coupled with cloud computing can bring much more computing power to many more people and their companies.


Putting stakeholder capitalism into practice

The real issue is the trade-offs between short-termism and long-termism. Eighty percent of CFOs tell us in surveys that they would reduce discretionary spending on potentially high-NPV [net present value] activities like R&D and marketing to achieve short-term earnings targets. They are literally sacrificing the long term for the short term. Yet research shows companies that think long-term—meaning five to seven years ahead—substantially outperform, achieving 47 percent higher revenue growth over a 15-year period, for example. Stakeholder and shareholder interests do align in the long term. If you have happy employees, collaborative suppliers, satisfied regulators, and devoted consumers, then they will help you deliver higher benefits over a longer-term period. It is hard to satisfy everybody in the short term; you may have to make trade-offs, for example, between purpose and profit. But in the long term, we don’t believe this trade-off exists. The crystalizing concept here is purpose-driven ESG. Companies wondering how to deliver on the long-term stakeholder goals should start by asking the questions, “What is our purpose? What would the world lose if our company disappeared?”


AI is quietly eating up the world’s workforce with job automation

AI is also automating jobs in customer service, accounting, and a host of other professions. For instance, companies like Thankful, Yext, and Forethought use AI to automate customer support. This shift is often imperceptible to the customer, who doesn’t know if they’re speaking to a biological intelligence or a machine. The rise of AI-powered customer service has big implications for the workforce. It’s estimated that 85 percent of customer interactions are already handled without human interaction. According to the Bureau of Labor Statistics, there are nearly 3 million customer service representatives employed in the United States. Many of these jobs are at risk of being replaced by AI. When jobs like these are automated away, the question is: Where do the displaced workers go? The answer is not clear. It’s possible that many of these workers will be re-employed in other fields. But it’s also possible that they will become unemployed, and that the economy will struggle to absorb them.



Quote for the day:

"Little value comes out of the belief that people will respond progressively better by treating them progressively worse." -- Eric Harvey

Daily Tech Digest - April 19, 2021

Time to Modernize Your Data Integration Framework

You need to be able to orchestrate the ebb and flow of data among multiple nodes, either as multiple sources, multiple targets, or multiple intermediate aggregation points. The data integration platform must also be cloud native today. This means the integration capabilities are built on a platform stack that is designed and optimized for cloud deployments and implementation. This is crucial for scale and agility -- a clear advantage the cloud gives over on-premises deployments. Additionally, data management centers around trust. Trust is created through transparency and understanding, and modern data integration platforms give organizations holistic views of their enterprise data and deep, thorough lineage paths to show how critical data traces back to a trusted, primary source. Finally, we see modern data analytic platforms in the cloud able to dynamically, and even automatically, scale to meet the increasing complexity and concurrency demands of the query executions involved in data integration. The new generation of some data integration platforms also work at any scale, executing massive numbers of data pipelines that feed and govern the insatiable appetite for data in the analytic platforms.
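The lineage paths described above can be modeled as a simple dependency graph. Here is a minimal sketch of the idea (the dataset names and the recursive walk are illustrative assumptions, not any particular platform's API): a derived dataset is traced upstream until only primary, trusted sources remain.

```python
# Hypothetical lineage graph: each dataset maps to the upstream
# datasets it is derived from; an empty list marks a primary source.
lineage = {
    "revenue_dashboard": ["sales_mart"],
    "sales_mart": ["orders_raw", "fx_rates_raw"],
    "orders_raw": [],
    "fx_rates_raw": [],
}

def trace_to_sources(dataset, graph):
    """Walk upstream until only primary sources remain."""
    upstream = graph.get(dataset, [])
    if not upstream:
        return {dataset}
    sources = set()
    for parent in upstream:
        sources |= trace_to_sources(parent, graph)
    return sources

print(sorted(trace_to_sources("revenue_dashboard", lineage)))
# prints ['fx_rates_raw', 'orders_raw']
```

Real lineage systems track this at column level and across tools, but the core operation is the same graph traversal back to trusted primary sources.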


Will codeless test automation work for you?

While outsiders view testing as simple and straightforward, it's anything but true. Until as recently as the 1980s, the dominant idea in testing was to do the same thing repeatedly and write down the results. For example, you could type 2+3 into a calculator and see 5 as a result. With this straightforward, linear test, there are no variables, looping or condition statements. The test is so simple and repeatable, you don't even need a computer to run it. This approach is born from thinking akin to codeless test automation: Repeat the same equation and get the same result each time for every build. The two primary methods to perform such testing are the record and playback method, and the command-line test method. Record and playback tools run in the background and record everything; testers can then play back the recording later. Such tooling can also create verification points, to check the expectation that the answer field will become 5. Record and playback tools generally require no programming knowledge at all -- they just repeat exactly what the author did. It's also possible to express tests visually. Command-driven tests work with three elements: the command, any input values and the expected results.
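The command-driven pattern described above can be sketched in a few lines. This is a toy illustration, not the API of any specific tool: each row of the table holds a command, its input values, and the expected result, mirroring the calculator example in the text.

```python
# Toy command-driven test runner: each row is (command, inputs, expected).
def add(a, b):
    return a + b

COMMANDS = {"add": add}

test_table = [
    ("add", (2, 3), 5),   # the 2+3=5 calculator example from the text
    ("add", (0, 0), 0),
]

def run(table):
    """Execute every row and collect any mismatches."""
    failures = []
    for command, inputs, expected in table:
        actual = COMMANDS[command](*inputs)
        if actual != expected:
            failures.append((command, inputs, expected, actual))
    return failures

print(run(test_table))  # prints [] when every expectation holds
```

The appeal for non-programmers is that new tests are just new table rows; only the small set of commands requires code.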


Ghost in the Shell: Will AI Ever Be Conscious?

It’s certainly possible that the scales are tipping in favor of those who believe AGI will be achieved sometime before the century is out. In 2013, Nick Bostrom of Oxford University and Vincent Mueller of the European Society for Cognitive Systems published a survey in Fundamental Issues of Artificial Intelligence that gauged the perception of experts in the AI field regarding the timeframe in which the technology could reach human-like levels. The report reveals “a view among experts that AI systems will probably (over 50%) reach overall human ability by 2040-50, and very likely (with 90% probability) by 2075.” Futurist Ray Kurzweil, the computer scientist behind music-synthesizer and text-to-speech technologies, is a believer in the fast approach of the singularity as well. Kurzweil is so confident in the speed of this development that he’s betting hard. Literally: he’s wagering tech entrepreneur Mitch Kapor $10,000 that, by 2029, a machine intelligence will be able to pass the Turing test, a challenge that determines whether a computer can trick a human judge into thinking it is human.


Is your technology partner a speed boat or an oil tanker?

The opportunity here really cannot be overstated. It is there for the taking by organisations that are willing to approach technological transformation in a radically different way. This involves breaking away from monolithic technology platforms, obstructive governance procedures, and the eye-wateringly expensive delivery programmes so often facilitated by traditional large consulting firms. The truth is, you simply don’t need hundreds of people to drive significant change or digital transformation. What you do need is to adopt new technology approaches, re-think operating models and work with partners who are agile experts, who will fight for their clients' best interests and share their knowledge to upskill internal staff. Hand-picking a select group of top individuals to work in this way provides a multiplier of value when compared to hiring greater numbers of less experienced staff members. Of course, external partners must be able to deliver at the scale required by the clients they work with. But just as large organisations have to change in order to embrace the benefits of the digital age, consulting models too must adapt to offer the services their clients need at the value they deserve.


Best data migration practices for organizations

The internal IT team needs to work closely with the service provider to thoroughly understand and outline the project requirements and deliverables. This ensures that no aspect is overlooked and that both sides are up to speed on the security and regulatory compliance requirements. Not just the vendor, but the team members and all the tools used in the migration need to meet all the necessary certifications to carry out a government project. Of course, certain territories will have more stringent requirements than others. Finally, an effective transition or change management strategy will be important to complete the transition. Proper internal communications and comprehensive training for employees will help everyone involved be aware of what’s required from them, including grasping any new processes or protocols and avoiding any productivity loss during the data migration. While the nitty-gritty of a public sector migration might be similar to a private company’s, a government data migration can be a much longer and more unwieldy process, especially with the vast number of people and the copious amounts of sensitive data involved.


Will AI dominate in 2021? A Big Question

Technologies are captivating us with their innovations and gadgets: from artificial intelligence to machine learning, IoT, big data, virtual and augmented reality, blockchain, and 5G, everything seems poised to take over the world. Artificial intelligence in particular has tightened its grip on our lives almost without our noticing. During the pandemic, IT experts kept working from home, and the industry kept producing smart ideas and AI-driven innovations. Artificial intelligence has become the new normal: it will sit at the center of our post-pandemic lives and drive other nascent technologies toward success. Soon, AI will be the core of automated and robotic operations. In the blink of an eye, companies have adopted artificial intelligence, and it is making its way into several sectors. 2020 saw this deployment at a wider scale; even with AI experts working from home, progress in the field did not stop.


The promise of the fourth industrial revolution

There are some underlying trends in the following vignettes. The internet of things and related technologies are in early use in smart cities and other infrastructure applications, such as monitoring warehouses, or components of them, such as elevators. These projects show clear returns on investment and benefits. For instance, smart streetlights can make residents’ lives better by improving public safety, optimizing the flow of traffic on city streets, and enhancing energy efficiency. Such outcomes are accompanied with data that’s measurable, even if the social changes are not—such as reducing workers’ frustration from spending less time waiting for an office elevator. Early adoption is also found in uses in which the harder technical or social problems are secondary, or, at least, the challenges make fewer people nervous. While cybersecurity and data privacy remain important for systems that control water treatment plants, for example, such applications don’t spook people with concerns about personal surveillance. Each example has a strong connectivity component, too. None of the results come from “one sensor reported this”—it’s all about connecting the dots. 


How Hundred-Year-Old Enterprises Improve IT Ops using Data and AIOps

Sam Chatman, VP of IT Ops at OneMain Financial, explains the impact of leveraging AIOps: “Being able to understand what is released, when it’s released, and the potential impacts of that release. We are overcoming alert fatigue, and BigPanda will be our Watson of the Enterprise Monitoring Center (EMC) by automating alerts, opening incident tickets, and identifying those actions to improve our mean time to recovery. This helps us keep our systems up when our users and customers need them to be.” For other organizations, it might help to visualize what naturally happens to IT operations’ monitoring programs over time. Every time systems go down and IT gets thrown under the bus for a major incident, they add new monitoring systems and alerts to improve their response times. As new multicloud, database, and microservice technologies emerged, they added even more monitoring tools and increased observability capabilities. Having more operational data and alerts is a good first step, but then alert fatigue kicks in when tier-one support teams respond and must make sense of dozens to thousands of alerts.
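The alert correlation at the heart of tools like BigPanda is proprietary, but the basic idea can be approximated with a sketch: group alerts that share a service and arrive within a time window into a single incident (the five-minute window and the alert data below are assumptions for illustration).

```python
# Illustrative sketch: compress a raw alert stream into incidents by
# grouping alerts from the same service that arrive close together.
WINDOW = 300  # seconds; an assumed correlation window

def correlate(alerts):
    """alerts: list of (timestamp, service, message), sorted by timestamp."""
    incidents = []
    open_by_service = {}  # service -> index of its most recent incident
    for ts, service, msg in alerts:
        idx = open_by_service.get(service)
        if idx is not None and ts - incidents[idx]["last"] <= WINDOW:
            # Within the window: fold the alert into the open incident.
            incidents[idx]["alerts"].append(msg)
            incidents[idx]["last"] = ts
        else:
            # Too old or a new service: open a fresh incident.
            incidents.append({"service": service, "alerts": [msg], "last": ts})
            open_by_service[service] = len(incidents) - 1
    return incidents

raw = [(0, "db", "cpu high"), (30, "db", "replica lag"),
       (40, "api", "5xx spike"), (1000, "db", "cpu high")]
print(len(raw), "alerts ->", len(correlate(raw)), "incidents")
# prints: 4 alerts -> 3 incidents
```

Real platforms correlate on many more dimensions (topology, change events, alert text similarity), but even this one-dimensional grouping shows how dozens of alerts can collapse into a handful of actionable incidents.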


A perfect storm: Why graphics cards cost so much now

Demand for gaming hardware blew up during the pandemic, with everyone bored and stuck at home. In the early days of the lockdowns in the United States and China, Nintendo’s awesome Switch console became red-hot. Even replacement controllers and some games became hard to find. ... Beyond the AMD-specific TSMC logjam, the chip industry in general has been suffering from supply woes. Even automakers and Samsung have warned that they’re struggling to keep up with demand. We’ve heard whispers that the components used to manufacture chips—from the GDDR6 memory used in modern GPUs to the substrate material fundamentally used to construct chips—have been in short supply as well. Seemingly every industry is seeing vast demand for chips of all sorts right now. ... High demand and supply shortages are the perfect recipe for folks looking to flip graphics cards and make a quick buck. The second they hit the streets, the current generation of GPUs were set upon by “entrepreneurs” using bots to buy up stock faster than humans can, then selling their ill-gotten wares for a massive markup on sites like Ebay, StockX, and Craigslist.


How to sharpen machine learning with smarter management of edge cases

Production is when AI models prove their value, and as AI use spreads, it becomes more important for businesses to be able to scale up model production to remain competitive. But as Shlomo notes, scaling production is exceedingly difficult, as this is when AI projects move from the theoretical to the practical and have to prove their value. “While algorithms are deterministic and expected to have known results, real world scenarios are not,” asserts Shlomo. “No matter how well we will define our algorithms and rules, once our AI system starts to work with the real world, a long tail of edge cases will start exposing the definition holes in the rules, holes that are translated to ambiguous interpretation of the data and leading to inconsistent modeling.” That’s much of the reason why more than 90% of C-suite executives at leading enterprises are investing in AI, but fewer than 15% have deployed AI for widespread production. Part of what makes scaling so difficult is the sheer number of factors for each model to consider. In this way, HITL enables faster, more efficient scaling, because the ML model can begin with a small, specific task, then scale to more use cases and situations.
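The human-in-the-loop (HITL) approach mentioned above is commonly implemented as confidence-based routing: predictions the model is sure about are applied automatically, while edge cases fall through to a human review queue. A minimal sketch (the threshold and the sample predictions are assumptions for illustration):

```python
# HITL routing sketch: low-confidence predictions go to a human
# review queue instead of being applied automatically.
THRESHOLD = 0.9  # assumed cut-off; real systems tune this per task

def route(predictions):
    """predictions: iterable of (item, label, confidence) tuples."""
    auto, review = [], []
    for item, label, confidence in predictions:
        target = auto if confidence >= THRESHOLD else review
        target.append((item, label))
    return auto, review

preds = [("img1", "cat", 0.97), ("img2", "dog", 0.55), ("img3", "cat", 0.91)]
auto, review = route(preds)
print(len(auto), "auto-applied,", len(review), "sent to a human")
# prints: 2 auto-applied, 1 sent to a human
```

Human decisions on the review queue can then be fed back as labeled training data, which is how the model gradually extends from its small initial task to the long tail of edge cases.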



Quote for the day:

"A true dreamer is one who knows how to navigate in the dark" -- John Paul Warren

Daily Tech Digest - December 07, 2020

API3: The Glue Connecting the Blockchain to the Digital World

dAPIs are on-chain data feeds that are comprised of aggregated responses from first-party (API provider-operated) oracles. This allows for the removal of many vulnerabilities, unnecessary redundancies, and middleman taxes created by existing third-party oracle solutions. Further, using first-party oracles leverages the off-chain reputation of the API provider (compare this to the nonexistent reputation of anonymous third-party oracles). See our article “First-Party vs Third-Party Oracles” for a more extended treatise on these issues. Further, dAPIs are data feeds built with transparency. What we mean by this is: you know exactly where the data comes from — this ensures things like data quality as well as independence of data sources to mitigate skewness in aggregated results. Rather than having oracle-level staking — which is impractical and arguably infeasible for reasons alluded to in this article — API3 has a staking pool. API3 holders can provide stake to the protocol. This stake backs insurance services that protect users from potential damages caused by dAPI malfunctions. The collateral utility has the participants share API3’s operational risk and incentivizes them to minimize it. Staking in the protocol also grants stakers inflationary rewards and shares in profits.


Reconciling political beliefs with career ambitions

Data has been on the front lines in recent culture wars due to accusations of racial, gender, and other forms of socioeconomic bias perpetrated in whole or in part through algorithms. Algorithmic biases have become a hot-button issue in global society, a trend that has spurred many jurisdictions and organizations to institute a greater degree of algorithmic accountability in AI practices. Data scientists who’ve long been trained to eliminate biases from their work now find their practices under growing scrutiny from government, legal, regulatory, and other circles. Eliminating bias in the data and algorithms that drive AI requires constant vigilance on the part of not only data scientists but up and down the corporate ranks. As Black Lives Matter and similar protests have pointed out, data-driven algorithms can embed serious biases that harm demographic groups (racial, gender, age, religious, ethnic, or national origin) in various real-world contexts. Much of the recent controversy surrounding algorithmic biases has focused on AI-driven facial recognition software. Biases in facial recognition applications are especially worrisome if used to direct predictive policing programs or potential abuse by law enforcement in urban areas with many disadvantaged minority groups.


Why Data Privacy Is Crucial to Fighting Disinformation

In essence, if you can create a digital clone of a person, you can much better predict his or her online behavior. That’s a core part of the monetization model of social media companies, but it could become a capability of adversarial states who acquire the same data through third parties. That would enable much more effective disinformation. A new paper from the Center For European Analysis, or CEPA, also out on Wednesday, observes that while there has been progress against some tactics that adversaries used in 2016, policy responses to the broader threat of micro-targeted disinformation “lag.” “Social media companies have concentrated on takedowns of inauthentic content,” wrote authors Alina Polyakova and Daniel Fried. “That is a good (and publicly visible) step but does not address deeper issues of content distribution (e.g., micro-targeting), algorithmic bias toward extremes, and lack of transparency. The EU’s own evaluation of the first year of implementation of its Code of Practice concludes that social media companies have not provided independent researchers with data sufficient for them to make independent evaluations of progress against disinformation.” Polyakova and Fried suggest the U.S. government make several organizational changes to counter foreign disinformation.


How to assess the transformation capabilities of intelligent automation

We’re talking about smart, multi-tasking robots that are increasingly being trusted catalysts at the core of digital work transformation strategies. This is because they effortlessly perform joined up, data-driven work across multiple operating environments of complex, disjointed, difficult to modify legacy systems and manual workflows. And unlike any other robot, they deliver work without interruption, automatically making adjustments according to obstacles – different screens, layouts or fonts, application versions, system settings, permissions, and even language. These robots also uniquely solve the age old problem of system interoperability by reading and understanding applications’ screens in the same way humans do. They’re re-purposing the human interface as a machine-usable API – crucially without touching underlying system programming logic. This ‘universal connectivity’ also means that all current and future technologies can be used by robots – without the need of APIs, or any form of system integration. ... This capability breathes new life into any age of technology and enables these robots to be continually augmented with the latest cloud, artificial intelligence, machine learning, and cognitive capabilities that are simply ‘dragged and dropped’ into newly designed work process flows.


Basics of the pairwise, or all-pairs, testing technique

All-pairs testing greatly reduces testing time, which in turn controls testing costs. The QA team only checks a subset of input/output values -- not all -- to generate effective test coverage. This technique proves useful when there are simply too many possible configuration options and combinations to run through. Pairwise testing tools make this task even easier. Numerous open source and free tools exist to generate pairwise value sets. The tester must inform the tool about how the application functions for these value sets to be effective. With or without a pairwise testing tool, it's crucial for QA professionals to analyze the software and understand its function to create the most effective set of values. Pairwise testing is not a no-brainer in a testing suite. Beware these factors that could limit the effectiveness of all-pairs testing: unknown interdependencies of variables within the software being tested; unrealistic value combinations, or ones that don't reflect the end user; defects that the tester can't see, such as ones that don't reflect in a UI view but trigger error messages into a log or other tracker; and tests that don't find defects in the back-end processing engines or systems. 
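To make the reduction concrete, here is a naive greedy all-pairs generator. It is only a sketch for illustration; real pairwise tools (such as Microsoft's PICT) use more sophisticated algorithms. For three parameters with three values each, it covers every value pair across every pair of parameters in far fewer than the 27 exhaustive combinations:

```python
from itertools import combinations, product

def all_pairs(params):
    """Greedy cover: pick rows from the full cartesian product until
    every value pair of every two parameters appears in some row."""
    names = list(params)
    # Enumerate every (param_i, value, param_j, value) pair to cover.
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va in params[a]:
            for vb in params[b]:
                uncovered.add((i, va, j, vb))
    candidates = list(product(*params.values()))
    chosen = []
    while uncovered:
        # Take the candidate row covering the most still-uncovered pairs.
        best = max(candidates, key=lambda row: sum(
            (i, row[i], j, row[j]) in uncovered
            for i, j in combinations(range(len(names)), 2)))
        chosen.append(best)
        for i, j in combinations(range(len(names)), 2):
            uncovered.discard((i, best[i], j, best[j]))
    return chosen

params = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS", "Linux"],
    "locale": ["en", "fr", "de"],
}
tests = all_pairs(params)
print(len(tests), "pairwise tests vs", 3 * 3 * 3, "exhaustive combinations")
```

The savings grow with the number of parameters: full combination counts multiply, while the pairwise set grows only roughly with the square of the largest value counts.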


How can companies secure a hybrid workforce in 2021?

Even before remote work was ubiquitous, accidental and malicious insider threats posed a serious risk to data security. As trusted team members, employees have unprecedented access to company and customer data, which, when left unchecked, can undermine company, customer, and employee privacy. These risks are magnified by remote work. Not only has the pandemic’s impact on the job market made malicious insiders more likely to capture or compromise data to gain leverage with new employment prospects or to generate extra income, but accidental insiders are especially prone to errors when working remotely. For example, many employees are blurring the lines between personal and professional technology, sharing or accessing sensitive data in ways that could undermine its integrity. In response, companies need to be proactive about establishing and enforcing clear data management guidelines. In this regard, communication is key, and accountability through monitoring initiatives or other efforts will help keep data protected during the transition.


Working from home dilemma: How to manage your team, without the micro-management

Employees need to feel connected and trusted. Yet leaders who find it tough to trust their workforce might opt for micro-management; they'll continue to check up on their workers rather than checking in to see how they're getting on. Peterson says leaders should look to develop a management style that cultivates wellbeing. In uncertain times, employees need a sense of certainty from their leaders. Executives should ensure their staff feel engaged, not micro-managed. "It's more important than ever for managers to ask whether people are getting their ABCs: their autonomy, belonging and competence. Leaders who don't get that from their own boss will tend to overcompensate with the people they're managing; they'll micro-manage, and that's not helpful," he says. Lily Haake, head of the CIO Practice at recruiter Harvey Nash, agrees that leaders who micro-manage will struggle in the new normal. They won't get the best from their workers and their effectiveness will suffer. Haake says managers who want to cultivate wellbeing need to pick up on subtle signs that all isn't well. Executives should adopt a considered approach, using a technique like active listening, to pick up on potential issues before they become major problems.


The Fourth Industrial Revolution: Legal Issues Around Blockchain

Stakeholders in blockchain solutions will need to ensure that their products comply with a legal and regulatory framework that was not conceived with this technology in mind. From a commercial law standpoint, smart contracts must be contemplated for negotiation, execution and administration on a blockchain, and in a legal and compliant fashion. Liability needs to be addressed. What if the contract has been miscoded? What if it does not achieve the parties' intent? The parties must also agree on applicable law, jurisdiction, proper governance, dispute resolution, privacy and more. There are public policy concerns that should be taken into account in shaping new laws, rules and regulations. For example, permissionless blockchains can be used for illegal purposes such as money laundering or circumventing competition laws. Also, participants may be exposed to irresponsible actions on the part of the "miners" who create new blocks. Unfortunately, there aren't any current legal remedies for addressing corrupt miners. As lawyers and technologists ponder these issues, several solutions are being bandied about. One possible remedy involves a hybrid of permissioned and permissionless blockchains.


Why enterprises are turning from TensorFlow to PyTorch

PyTorch is seeing particularly strong adoption in the automotive industry—where it can be applied to pilot autonomous driving systems from the likes of Tesla and Lyft Level 5. The framework also is being used for content classification and recommendation in media companies and to help support robots in industrial applications. Joe Spisak, product lead for artificial intelligence at Facebook AI, told InfoWorld that although he has been pleased by the increase in enterprise adoption of PyTorch, there’s still much work to be done to gain wider industry adoption. “The next wave of adoption will come with enabling lifecycle management, MLOps, and Kubeflow pipelines and the community around that,” he said. “For those early in the journey, the tools are pretty good, using managed services and some open source with something like SageMaker at AWS or Azure ML to get started.” ... “The TensorFlow object detector brought memory issues in production and was difficult to update, whereas PyTorch had the same object detector and Faster-RCNN, so we started using PyTorch for everything,” Alfaro said. That switch from one framework to another was surprisingly simple for the engineering team too.


Techno-nationalism isn’t going to solve our cyber vulnerability problem

Techno-nationalism is fueled by a complex web of justified economic, political and national security concerns. Countries engaging in “protectionist” practices essentially ban or embargo specific technologies, companies, or digital platforms under the banner of national security, but we are seeing it used more often to send geopolitical messages, punish adversary countries, and/or prop up domestic industries. Blanket bans give us a false sense of security. At the same time, when any hardware or software supplier is embedded within critical infrastructure – or on almost every citizen’s phone – we absolutely need to recognize the risk. We need to take seriously the concern that their kit could contain backdoors that could allow that supplier to be privy to sensitive data or facilitate a broader cyberattack. Or, as is the lingering case with TikTok, the concern is whether the collection of data on U.S. citizens via an entertainment app could be forcibly seized under Chinese law and enable state-backed cyber actors to then target and track federal employees or conduct corporate espionage.



Quote for the day:

"Stand up for what you believe, let your team see your values and they will trust you more easily." -- Gordon Tredgold

Daily Tech Digest - December 15, 2019

5 Key Insights From Intel’s New “Accelerate Industrial”

A technical skills gap stands out as the number one obstacle to a successful digital transformation—flagged as crucial by over a third of respondents. Intel’s report highlights a dramatic shift in the mix of skills needed for success: Manufacturing companies believe the top 5 skills they will need for future growth are all digital skills, from data science to cybersecurity. Manufacturing skills, ranked as today’s second most valuable ability, rank only #6 when looking at the future. Crucial will be the workforce’s “digital dexterity”, that is, the ability to understand both the manufacturing process and the new digital tools. To leverage the full value of digital-industrial innovations, companies will need to truly meld digital technologies into their manufacturing processes, and this requires a workforce fluent in both sets of skills. ... The skills gap represents a tremendous challenge for companies. At the moment, companies are trying to address the gap by setting up training programs in specific digital skills. This, however, will not be enough.



Blood test combined with AI program could speed up diagnosis of brain tumors

Dr Brennan has worked with Dr Matthew Baker, reader in chemistry at the University of Strathclyde, UK, and chief scientific officer at ClinSpec Diagnostics Ltd, to develop a test to help doctors quickly and efficiently find those patients who are most likely to have a brain tumor. The test relies on an existing technique, called infrared spectroscopy, to examine the chemical makeup of a person's blood, combined with an AI program that can spot the chemical clues that indicate the likelihood of a brain tumor. The researchers tried out the new test on blood samples taken from 400 patients with possible signs of brain tumor who had been referred for a brain scan at the Western General Hospital in Edinburgh, UK. Of these, 40 were subsequently found to have a brain tumor. Using the test, the researchers were able to correctly identify 82% of brain tumors. The test was also able to correctly identify 84% of people who did not have brain tumors, meaning it had a low rate of 'false positives'. In the case of the most common form of brain tumor, called glioma, the test was 92% accurate at picking up which people had tumors.
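The figures quoted above hide a base-rate effect worth noting. Taking them at face value (82% sensitivity, 84% specificity, 40 tumors among 400 referred patients), a rough back-of-the-envelope calculation of the test's positive predictive value, i.e. the chance that a positive result reflects a real tumor in this cohort:

```python
# Back-of-the-envelope calculation from the figures in the excerpt.
total, tumors = 400, 40
sensitivity, specificity = 0.82, 0.84

true_pos = sensitivity * tumors              # ~33 tumors correctly flagged
true_neg = specificity * (total - tumors)    # ~302 healthy correctly cleared
false_pos = (total - tumors) - true_neg      # ~58 healthy wrongly flagged

ppv = true_pos / (true_pos + false_pos)
print(round(ppv, 2))  # prints 0.36
```

So even with seemingly strong accuracy figures, only about a third of positives in this population would be true tumors, which is why the test is positioned as a triage tool to prioritize brain scans rather than a standalone diagnosis.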


Google rolls out Verified SMS and Spam Protection in Android

As the name of the first feature hints, Verified SMS works by confirming the identity of the SMS sender. "When a message is verified (which is done without sending your messages to Google) you'll see the business name and logo as well as a verification badge in the message thread," said Roma Slyusarchuk, a Google Software Engineer on the Messages app. Verified SMS will only be used to verify the authenticity of SMS messages sent by businesses. It won't verify and add a verification badge to messages sent by normal users. Google said it created this feature to help users trust the messages they receive, especially for "things like one-time passwords, account alerts or appointment confirmations." The Android OS maker didn't explain how the new feature works, but it did say that it should be able to detect SMS messages sent from random numbers, previously not associated with a company, and consequently, help prevent some phishing attacks.


Slow Down to Do More: “Leave room in your schedule for the unexpected” 

One of the biggest problems with rushing through things, both in work and in life, is that it increases the likelihood that you'll make a mistake. Multitasking is a skill many people want to master, but studies have shown that trying to focus on several tasks at once doesn't let you do any of them well, and it doesn't save time. It can actually waste time: each time you switch from one task to another, your brain must refocus, which takes longer than simply concentrating on one task at a time. In addition, people who rush through their work tend to have higher stress levels, which can lead to more health problems and lower happiness. Finally, we need to find time to take some distance from our work, to take the high ground and simply think. We are constantly consumed by distractions; when we take the time to break from the norm and create room for ideas, we will be considerably more productive, healthier, and happier.


Agile Estimation — Prerequisites for Better Estimates

measuring tape
One person might consider all aspects of the functional and nonfunctional requirements and estimate an item as big; another might estimate it as small without considering nonfunctional requirements such as security and performance. Estimates also depend on your delivery practices: if unit tests, automation, accessibility, and device support are part of your definition of done, the estimate will be different. Of course, I definitely recommend making all these practices part of your estimation. They are essential for quality and maintainability, and they will cut costs in the long run. ... The development team and the product management team must be on the same page. The development team must understand the business goals as well as the product management team does, understand that team's objectives, and identify the must-have requirements for supporting business growth. This will help you decide what kind of architecture foundation is required: if the roadmap calls for significant growth in size (users, data footprint), the architecture will have to be different from one built for shorter-term business goals.


Q&A on the Book Building Digital Experience Platforms

Digital Experience Platforms (DXPs) are integrated sets of technologies that aim to provide a user-centric, engaging experience, improve productivity, accelerate integration, and deliver solutions quickly. DXPs are built on a platform philosophy, so they can easily extend and scale to future demands for innovation and continuously adapt to changing technology trends. Enterprises gain a solid, integrated foundation for all their applications, which meets the needs of organizations going through digital transformation and provides a better customer experience across all touchpoints. DXPs package the most essential technologies, such as content management, portals, and ecommerce, which are necessary to digitize enterprise operations and play a crucial role in the digital transformation journey. DXPs offer built-in features such as presentation, user management, content management, personalization, analytics, integrations, SEO, campaign management, social and collaboration, and search, among others.


4 Robotic Process Automation Trends For 2020

Robotic Process Automation
For a long time, forecasters have anticipated a future in which robots and intelligent systems run the world to the detriment of human workers. Job losses, they predicted, would be unavoidable as AI did things faster, smarter, and with fewer HR headaches. However, an HBR report that studied the effect of various RPA implementations found that replacing administrative employees was neither the primary goal nor a typical outcome in 47% of the projects it examined. In fact, only a handful of those RPA projects led to reductions in headcount, and in many cases the tasks had already been moved to outside workers. RPA bots are designed to adapt to changing conditions and automatically determine the correct response quickly. RPA is most commonly thought of as a productivity and efficiency tool: reducing or eliminating tedious manual processes is an efficiency gain in itself. RPA and other forms of automation will also become a more visible part of information security strategies, not because an army of bots will be fighting threats on the front lines, but because they can help reduce the most pervasive risk of all: human error.



Angular Breadcrumbs with Complex Routing and Navigation

The UI structure of the breadcrumbs on any serious website looks simple. But the underlying code logic, operation rules, and navigation workflow are not simple at all, due to the related routing complexities and navigation varieties. This article demonstrates a sample application with full-featured breadcrumbs and discusses how the implementation and testing issues were resolved. The sample application that can be downloaded with the above links is a modified version of the original Heroes Example from the Angular document Routing & Navigation. I didn't want to reinvent the wheel by creating my sample application from scratch. The Heroes Example covers most routing patterns and types, and hence can serve as a base for adding breadcrumb features. However, it is not enough for demonstrating realistic breadcrumbs with complex navigation scenarios and complete workflows. The modification tasks involve adding more pages with corresponding navigation routes, changing UI structures and styles, fixing active router-link issues with custom alternatives, and updating the code logic for authenticated session creation and persistence, to mention a few.
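The core logic behind such breadcrumbs is a traversal of the matched route chain from root to leaf, accumulating URL segments and labels. In Angular this traversal would read the `ActivatedRouteSnapshot` tree and a `breadcrumb` entry in each route's `data`; the framework-agnostic sketch below (my own modeling, with routes as nested dicts) shows only that traversal idea, not the article's actual code:

```python
# Framework-agnostic sketch of the breadcrumb idea: walk the matched route
# chain from root to leaf, building up each crumb's URL and label. Routes
# without a label are skipped, mirroring label-less wrapper routes.
def build_breadcrumbs(route, base_url=""):
    """Return [(label, url), ...] for a matched route chain."""
    crumbs = []
    while route is not None:
        path = route.get("path", "")
        url = f"{base_url}/{path}" if path else base_url
        label = route.get("breadcrumb")
        if label:
            crumbs.append((label, url or "/"))
        base_url = url
        route = route.get("child")     # next matched child route, if any
    return crumbs

# A matched chain for /heroes/42, loosely modeled on the Heroes example.
chain = {"path": "", "breadcrumb": "Home",
         "child": {"path": "heroes", "breadcrumb": "Heroes",
                   "child": {"path": "42", "breadcrumb": "Hero Detail",
                             "child": None}}}
print(build_breadcrumbs(chain))
# [('Home', '/'), ('Heroes', '/heroes'), ('Hero Detail', '/heroes/42')]
```

In a real Angular app the same walk is typically re-run on every `NavigationEnd` event, and dynamic labels (such as a hero's name) are resolved from route parameters or resolver data rather than stored statically.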


Blockchain Prediction: 2020 Will Enable Levels of Data Trust


It will seem counterintuitive to most CISOs and other security professionals to hear that something public is more secure. Enterprises often prefer to operate in their walled gardens and will at first be skeptical of public ledgers. But this stance will change over time. It is somewhat analogous to what happened with intranets and the internet: at first, enterprises only wanted internally connected systems (intranets), but eventually realized the value in connecting to external networks (the internet) as well. Interest in blockchain has also germinated a vibrant research community that is looking into novel cryptographic techniques such as zero-knowledge proofs, trusted computing platforms, verifiable delay functions, and other innovative "cryptoeconomic" tools. As this research moves from the lab to the data center, we anticipate that these technologies will make computing more secure and private than ever before. Security has always been a priority; more recently, privacy has become one too. Individuals aren't in control of their data: from your healthcare records to your browsing history, your data is at risk of being exposed or, worse, manipulated.


Two Critical Questions for your Enterprise Blockchain Application

question-mark-graffiti
Any data going onto a public chain is open, accessible, and irrevocable. Thus, a public blockchain is not GDPR-compliant (nor CCPA-compliant from next year) unless the data has been encoded with quantum-resistant algorithms before being stored. Personally Identifiable Information (PII) or sensitive data compromising user privacy should not be stored on a blockchain. However, a blockchain still needs account (wallet) addresses to individually link them with their real users ... The performance of software directly depends on the performance of its dependencies and their host environments. Blockchain brings a new paradigm of decentralized architecture, where every node on the chain constantly updates its state to maintain the world state. In addition, a blockchain application also needs to deal with the following issues and their varied implementations. ... A blockchain relies on the distributed consensus of participant nodes. PoW (Proof of Work) consensus takes more time to achieve finality across the system than a PoS (Proof of Stake) system, which can use a finality gadget to mark blocks as final.
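One common pattern for keeping PII off an immutable ledger (my own illustration of standard practice, not something the article prescribes) is to store the raw record in off-chain storage and put only a salted hash on-chain. The on-chain digest lets anyone verify an off-chain record without exposing it, and deleting the off-chain record and salt honors a GDPR erasure request, since the remaining digest can no longer be linked to the person:

```python
# Sketch of the "hash on-chain, PII off-chain" pattern. The salt prevents
# dictionary attacks against low-entropy PII such as email addresses.
import hashlib
import os

def commit_pii(pii, salt=None):
    """Return (salt, digest). Store the digest on-chain; keep salt + PII off-chain."""
    salt = salt or os.urandom(16)
    digest = hashlib.sha256(salt + pii).hexdigest()
    return salt, digest

def verify_pii(pii, salt, onchain_digest):
    """Check an off-chain record against the digest recorded on-chain."""
    return hashlib.sha256(salt + pii).hexdigest() == onchain_digest

salt, digest = commit_pii(b"alice@example.com")
print(verify_pii(b"alice@example.com", salt, digest))  # True
print(verify_pii(b"bob@example.com", salt, digest))    # False
```

Discarding the salt alone is enough to make the on-chain digest practically unlinkable, which is why salted commitments (or the zero-knowledge techniques mentioned above) are the usual bridge between immutable ledgers and right-to-erasure regulations.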



Quote for the day:


"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr