
Daily Tech Digest - November 24, 2025


Quote for the day:

"Give whatever you are doing and whoever you are with the gift of your attention." -- Jim Rohn



The incredible shrinking shelf life of IT skills

IT workers have seen the half-life of IT skills compressed even more dramatically, with researchers saying some skills today go from hot to not in less than two years — sometimes mere months. It’s putting a lot of pressure on IT teams. As Anand says, “Technology is developing faster than tech workers can upskill.” Ever-quickening churn in the IT skills market is upending more than individuals’ career plans, too. It is impacting the entire IT function and the organization as a whole. That in turn is forcing CIOs, HR leaders, and other executives to devise strategies to create an environment where workers are capable of reinvention at a rapid clip. ... CIOs and IT advisers also say the shortening shelf life of skills is not experienced universally, as some organizations still have a lot of legacy tech in place. Data from the 2025 Tech Salary Report from Dice, a job-searching platform for tech professionals, hints at these dual realities. ... “Certain skills will come up very quickly and then go away very quickly, so now that person has to be seen as someone who can build up skills quickly,” he adds. Info-Tech Research Group’s Leier-Murray says CIOs must free up time for their staffers to upskill and provide more coaching to their team members to ensure they keep pace with the work demands of a modern IT shop. She and others advise CIOs to hire workers with or cultivate in existing staffers a growth mindset. ... “The way that everybody is working is continuously being redefined,” Jones says.


Are Organizations Overinvesting in an AI Bubble? - Part 1

Demand for generative AI reasoning is driving investment, said Arun Chandrasekaran, distinguished vice president analyst at Gartner. "These partnerships signal the model providers' insatiable need for compute to satisfy the enormous growth and usage, mainly in the consumer AI space." When asked to confirm an AI bubble, Chandrasekaran said, "It is hard to predict if there is a bubble and when it will burst. But we'll likely see a correction and shake-out among players that can't deliver value to users and build profitable growth strategies." Continuous investment of large amounts of money, at high valuations for AI companies, "is unsustainable," Umesh Padval, investor, entrepreneur and former managing director of Thomvest Ventures, told Information Security Media Group. ... "Enterprises are excited about gen AI's speed of delivery. However, the punitively high cost of maintaining, fixing or replacing AI-generated artifacts such as code, content and design can erode gen AI's promised return on investments," Chandrasekaran said. "By establishing clear standards for reviewing and documenting AI-generated assets and tracking technical debt metrics in IT dashboards, enterprises can take proactive steps to prevent costly disruptions." Chandrasekaran warns about overinvestment without determining the "value path." He said organizations should realize that the expected payoff, including ROI, is much more long term, which can lead to risks.


The CISO’s greatest risk? Department leaders quitting

The trend of talented and dedicated functional security leaders quietly eyeing the exit is not an anomaly — it’s a predictable outcome of systemic issues that have been building within the profession for years, says Brandyn Fisher, V-CISO capability lead at Centric Consulting. “As CISOs, we are seeing our most critical layer of management, our directors and senior managers, burn out,” Fisher says. “This isn’t happening in a vacuum. It’s the result of a dangerous convergence of unrealistic expectations, resource starvation, and a fundamentally broken career model.” Security leaders operate on an unsustainable premise, Fisher says. “We expect our leaders to be right every single time, while an attacker only needs to be right once. This creates a culture of hyper-vigilance that is simply not sustainable 24/7/365.” ... Another issue is tool creep, with 40-plus security tools managing the same alerts and poor integrations, Malik says. There is also “role overload and context switching” on projects, as well as relentless audit cycles, reviews, and meetings, which Malik says leaves little time for career development. “Many organizations have a CISO plus a flat layer of ‘heads of X’” who don’t always have a clear path to moving into higher levels, she says. And CISOs are constantly asking their leaders to do more with less, Fisher adds. “As cybersecurity is still widely viewed as a cost center rather than a business enabler, budgets are the first to be slashed while the threat landscape grows exponentially,” he says.


Preparing for the Next Wave of AI: Agentic Workflows

Agentic AI blends intelligence and automation into a single operational layer that can manage outcomes rather than just execute steps. Instead of relying on humans to define every possible rule, agentic systems understand goals and context. They can reason through multiple inputs, choose the best path forward, and adapt as conditions change. ... Optimizing for agentic AI isn’t just about adding smarter tools; it begins with re-architecting the environment those tools inhabit. Organizations that thrive will have integrated, high-quality data foundations and unified workflows. Fragmented systems or poor data hygiene can cripple an AI agent’s ability to reason effectively. For many enterprises, this means modernizing their systems of record – CRMs, ERPs, and HR platforms – that make up digital operations. Equally important is the need for well-defined guardrails. Businesses must define what good decisions look like, the limits of an agent’s autonomy, and the ethical or compliance constraints that must be followed. This balance between freedom and control is critical. Too many restrictions, and the AI can’t act usefully, but too few and it risks acting outside the organization’s intentions. ... On the flip side, unclear use cases/business value was the top answer for other respondents. While both groups cited risk and compliance concerns as a top challenge, it’s clear there’s a divide on where employees fit into the agentic AI puzzle.


The privacy tension driving the medical data shift nobody wants to talk about

Current frameworks lock data into silos. These isolated systems make it difficult to combine information across hospitals, labs, and research groups. This limits what can be learned from real-world evidence, which is especially important for improving treatments, studying outcomes, and reducing costs. ... Outdated rules can worsen inequities by limiting access to new tools and restricting research to well-funded institutions. This contradicts the principle of justice, which is meant to promote fairness and access. The authors emphasize that privacy still matters. They write that, “privacy protections exist for many reasons, addressing risks to individual patients as well as the public at large.” But they argue that privacy cannot stand alone as the primary value in a system where data powers both scientific progress and new forms of risk. ... The most significant proposal in the research is a gradual move toward an open data model. In this approach, healthcare data would be treated as a shared resource rather than locked property. Access would come with responsibilities and consequences for misuse instead of blanket restrictions on legitimate use. ... A key argument is that penalties should target bad behavior rather than access. Current rules assume data must be kept behind walls to prevent harm, even though perfect anonymization is no longer possible. The researchers argue that the system should focus on preventing malicious reidentification and unethical use. This approach, they say, is more realistic and gives space for innovation. 


The expanding role of the CISO

New research from HackerOne has revealed that 84 per cent of CISOs are now responsible for AI security, while 82 per cent are charged with protecting data privacy. The result is an already burdened CISO being asked to monitor and secure technologies that are evolving at breakneck speed. New technology is constantly being implemented across businesses, and when complex technologies such as AI are adopted by 78 per cent of organisations – a 23 per cent increase from the previous year – the scale and intensity of the task become clear. This rapid adoption, often driven by different parts of the business eager for a competitive edge, creates entirely new attack surfaces which must remain under constant surveillance to ensure no security risks go unnoticed. For a CISO, this task can seem insurmountable – even the most skilled internal teams will struggle if they lack the specialised knowledge. Faced with a variety of unique vulnerabilities, CISOs will need the right tools and support in order to keep the business safe. ... Unfortunately, the lack of talent and resources serves as a significant barrier to adopting this full-scale offensive security programme, with 39 per cent of CISOs highlighting this lack of skilled personnel as a major challenge. On a global scale, the cybersecurity industry urgently needs around four million more professionals to bridge the current gap in key roles. However, taking a crowdsourced security approach offers a powerful, scalable solution for businesses to tackle this problem. 


A Day in the Life of a Connected Patient: How Real-Time Data Is Powering Smarter Care

Health data arrives in bursts and fragments. It comes from different tools, moves at different speeds, and rarely follows the same format. Making sense of it all takes more than storage. It takes design that expects disorder—and knows how to organize it. Data pipelines help bridge this complexity. They link together systems like EHRs, insurance claims, wearables, and diagnostic tools—so that the information can move securely and consistently. Standards like HL7 and FHIR help make these handoffs work, even across aging platforms. As the data moves, it’s shaped into something usable. Behind the scenes, it’s cleaned, structured, and enriched before reaching analytics teams or clinical systems. The work happens in moments, but its impact is lasting. ... Discharge no longer means disconnection. For patients managing chronic conditions, remote care programs have changed what happens after they leave the hospital. One such initiative pulled continuous data from wearables, implants, and diagnostic devices into a secure cloud system. Care teams could monitor trends, identify risks early, and step in before issues got worse. In patients with chronic conditions, timely support made a measurable difference. Readmissions dropped by almost 40%. Simple check-ins and reminders helped people stay on course—not through pressure, but with steady, well-timed guidance. At scale, the results were even clearer. For every 10,000 patients, the program saved more than USD 1 million a year. 
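The "cleaned, structured, and enriched" step can be sketched in a few lines of Python. The wearable payload and helper name below are hypothetical; only the Observation shape and the LOINC heart-rate code (8867-4) follow the FHIR conventions the article mentions:

```python
def wearable_to_observation(reading: dict) -> dict:
    """Normalize a hypothetical wearable heart-rate reading into a
    simplified FHIR-style Observation resource."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{"system": "http://loinc.org",
                        "code": "8867-4",        # LOINC code for heart rate
                        "display": "Heart rate"}],
        },
        "subject": {"reference": f"Patient/{reading['patient_id']}"},
        "effectiveDateTime": reading["timestamp"],
        "valueQuantity": {"value": reading["bpm"], "unit": "beats/minute"},
    }

# A raw device payload, as it might arrive from one vendor's API.
raw = {"patient_id": "p-001", "bpm": 72,
       "timestamp": "2025-11-24T08:15:00+00:00"}
obs = wearable_to_observation(raw)
print(obs["resourceType"], obs["valueQuantity"]["value"])
```

In a real pipeline, one such adapter per source feeds a common validated schema, which is what lets downstream analytics and clinical systems treat heterogeneous inputs uniformly.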


Micro-Frontends: A Sociotechnical Journey Toward a Modern Frontend Architecture

As organisations demand faster delivery, greater autonomy, and continuous modernisation, our frontend architectures must evolve in step with our teams. The distributed frontend era is here, but it’s not defined by new frameworks or fancy tooling. It’s defined by the way we align people, processes, and architecture around a shared goal: delivering value faster without losing control. ... Micro-frontends are often introduced as a technical pattern - a way to break a large frontend into smaller, independently deployable pieces. But that framing misses the point. Micro-frontends are not a new stack; they are a new way of structuring work. They represent a sociotechnical shift - one that mirrors Conway’s Law, which tells us that system design reflects communication structures. When teams are forced to coordinate through a single release train, decision-making slows. When every change requires syncing across multiple domains, creativity fades. The result is not just technical debt but organisational inertia. Micro-frontends reverse that dynamic. They allow teams to own slices of the product end-to-end - domain, design, delivery - without waiting for centralised approval. ... But micro-frontends are not a silver bullet. For small teams or products with limited complexity, the overhead might outweigh the benefits. The goal is not to adopt a pattern for its own sake but to solve concrete problems: delivery bottlenecks, scaling limits, and the inability to modernise safely.


Software Testing in the AI Era - Evolving Beyond the Pyramid

The past few years have seen a radical departure from the previous approach with the shift to LLM-based tools. Ideally, each approach to automation should not only meet code coverage goals, but also integrate seamlessly with industrial-scale continuous deployment workflows as a practical matter. The latter wasn’t really the case until AI came along. ... Regardless of the underlying strategy, search-based algorithms share a key component - the “fitness function,” i.e., the goal criteria used to guide the algorithm towards better solutions. Code coverage, though simplistic, is an often-used metric to gauge how good a software testing suite is, and is therefore a commonly used fitness function when generating tests using search algorithmic approaches. In practical applications of this technique, several open source tools have been developed, with EvoSuite being a popular option using a genetic-algorithm approach to generate unit tests for Java code. ... Test generation can be considered a subfield under LLMSE, with the key components of an LLM-based test generation strategy including inputs such as the code under test, prompt generators, test validation, and prompt refiners to tune and refine the generated tests in a feedback loop. Compared to search-based strategies, this technique is still in its infancy, but it has gained traction because prompt refinement over the model’s output yields human-readable tests that require little post-processing.
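To make the search-based idea concrete, here is a toy genetic algorithm whose fitness function counts the distinct branches a candidate test suite exercises. This is a minimal sketch of the technique, not EvoSuite; the function under test and all parameters are invented for illustration:

```python
import random

def program_under_test(x: int) -> str:
    # Toy function with three branches a good test suite should cover.
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"

def covered_branches(suite):
    """Fitness function: number of distinct branches the suite exercises."""
    return len({program_under_test(x) for x in suite})

def evolve(pop_size=20, suite_len=3, generations=50, seed=1):
    rng = random.Random(seed)
    # Each individual is a candidate test suite: a list of inputs.
    pop = [[rng.randint(-5, 5) for _ in range(suite_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=covered_branches, reverse=True)
        parents = pop[: pop_size // 2]          # selection: keep the fittest
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(suite_len)] = rng.randint(-5, 5)  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=covered_branches)

best = evolve()
print(best, covered_branches(best))
```

Real tools add crossover, richer fitness signals (branch distance rather than a flat count), and assertion generation, but the loop — mutate, score against the fitness function, select — is the same.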


The rise (and fall?) of shadow AI

“The security surface extends far beyond traditional concerns. For AI systems, the model and data become the primary attack vectors,” said Meerah Rajavel, chief information officer at Palo Alto Networks, on the company’s own blog. “While frontier models from providers like Google and OpenAI carry lower risk due to extensive testing, most AI applications incorporate multiple specialised models.” ... “Organisations must scan models for vulnerabilities, manage permissions appropriately and protect data access. Runtime security becomes critical because prompts function like code and the LLM acts as an operating system. That has to be protected like a software supply chain,” said Rajavel. ... Shadow AI detection and control is a growing marketplace. Other vendors that operate here include Netskope with its Netskope One platform, which includes AI security capabilities to detect shadow AI usage. Not exactly a like-for-like competitor but still in the same core operational arena, the SaaS management toolset from Zylo is built to discover and manage all of an organisation’s SaaS applications, including unauthorised AI tools, by centralising data, risk scores and usage. “To address the risk [of shadow AI], CIOs should define clear enterprise-wide policies for AI tool usage, conduct regular audits for shadow AI activity and incorporate GenAI risk evaluation into their SaaS assessment processes,” said Arun Chandrasekaran at magical analyst house Gartner.

Daily Tech Digest - March 15, 2021

How to scale AI with a high degree of customization

Scaling machine learning programs is very different to scaling traditional software because they have to be adapted to fit any new problem you approach. As the data you’re using changes (whether because you’re attacking a new problem or simply because time has passed), you will likely need to build and train new models. This takes human input and supervision. The degree of supervision varies, and that is critical to understanding the scalability challenge. A second issue is that the humans involved in training the machine learning model and interpreting the output require domain-specific knowledge that may be unique. So someone who trained a successful model for one business unit of your company can’t necessarily do the same for a different business unit where they lack domain knowledge. Moreover, the way an ML system needs to be integrated into the workflow in one business unit could be very different from how it needs to be integrated in another, so you can’t simply replicate a successful ML deployment elsewhere. Finally, an AI system’s alignment to business objectives may be specific to the group developing it. For example, consider an AI system designed to predict customer churn.


The snags holding back DevOps: culture, delivery and security

Cultural issues create this disjointed relationship between Dev and Ops. "Culture is the number one missing component, but there is also a failure to truly connect and automate across functional silos," Dawson says. "This results in lack of shared visibility, consistent feedback to drive improvement and, potentially, a negative experience which inhibits adoption." There are too many tools competing for Dev and Ops teams' mindshare as well. "A single team may have anywhere between 20 to 50 tools," says Kakran. "Separating signal from noise when you are bombarded by hundreds of alerts per hour is quite challenging." The continuous delivery piece is also a snag in the continuous integration / continuous delivery (CI/CD) that should flow effortlessly through DevOps. "Enterprises are lagging in test automation and are increasing efforts to automate continuous testing, which is a core component of CD," says Venky Chennapragada, DevOps architect with Capgemini North America. "Some enterprises are unable to adopt a high level of CI/CD because their application portfolio mostly consists of packaged software, legacy software or ERP systems."


PyTorch Code for Self-Attention Computer Vision

Self-attention is gradually gaining prominence, moving from sequence modeling in natural language processing to medical image segmentation. It replaces conventional recurrent neural networks and convolutional neural networks in many applications to achieve new state-of-the-art in respective fields. Transformers and their variants and extensions make heavy use of self-attention mechanisms. Self-Attention Computer Vision, known technically as self_attention_cv, is a PyTorch based library providing a one-stop solution for all of the self-attention based requirements. It includes varieties of self-attention based layers and pre-trained models that can be simply employed in any custom architecture. Rather than building the self-attention layers or blocks from scratch, this library helps its users perform model building in no time. On the other hand, the pre-trained heavy models such as TransUNet, ViT can be incorporated into custom models and can finish training in minimal time even in a CPU environment! According to its contributors Adaloglou Nicolas and Sergios Karagiannakos, the library is still under active development, with the latest models and architectures being added.
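For intuition about the primitive such libraries package up, scaled dot-product self-attention — softmax(QKᵀ/√d)V — can be written in a few lines of dependency-free Python. This is a sketch of the mechanism itself, not the self_attention_cv API:

```python
import math

def softmax(row):
    m = max(row)                       # subtract max for numerical stability
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(r, c)) for c in zip(*b)] for r in a]

def self_attention(q, k, v):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(q[0])
    kt = [list(col) for col in zip(*k)]                  # transpose K
    scores = matmul(q, kt)                               # similarity of tokens
    scaled = [[s / math.sqrt(d) for s in row] for row in scores]
    weights = [softmax(row) for row in scaled]           # rows sum to 1
    return matmul(weights, v)                            # weighted mix of V

# Three tokens with 2-dimensional embeddings; Q = K = V for self-attention.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = self_attention(x, x, x)
print(out)
```

Each output row is a convex combination of the value rows, weighted by how strongly that token attends to every other token — the libraries add learned projections, multiple heads, and GPU kernels on top of exactly this computation.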


How to Choose the Right Cybersecurity Framework

Start by setting goals for your cybersecurity program that align with the business's needs. Stakeholders from across the organization — from the C-suite and upper management to support teams and IT — should be involved in the initial risk-assessment process and setting a risk-tolerance level. While deciding where to start your implementation can feel like trying to boil the ocean, one way to make it less intimidating is to run a pilot program focused on a single department. This can help uncover lessons about what does and doesn't work, what tools will help you succeed, and best practices for a wider rollout. From there, identify the type of data the organization processes and map out its life cycle. A simple model will help lay a foundation for understanding the organization's cybersecurity risk and identify points along the supply chain to invest more time and resources. Business tools and software are often important sources and collectors of data, so ask vendors about their data privacy policies to ensure they reflect your goals. ... A good cybersecurity framework will help you identify risks, protect company assets (including customer data), and put steps in place to detect, respond, and recover from a cybersecurity event.


Is Data Science a science?

Before we tackle the idea of whether Data Science is a science or not, something that doesn’t seem to have a definitive answer, let’s step back and look at the idea of proof. This is a word that is overused quite frequently as there are many different kinds of proof: for example, there are scientific proofs, legal proofs, and mathematical proofs. In mathematics, a proof is an inferential argument that shows a statement is true as supported by axioms, definitions, theorems, and postulates. Mathematicians normally use deductive reasoning to show that the premises, also called statements, in a proof are true. A direct proof is one that shows a given statement is always true and the proof is usually written in a symbolic language. In an indirect proof, mathematicians usually employ proof by contradiction, where they assume the opposite statement is true and eventually reach a contradiction showing the assumption is false. In science, an inherently inductive enterprise,² we cannot prove any hypothesis to be true as that would require an infinite number of observations so the best we can hope to do is use inductive reasoning as the basis of our generalization and hold it to be provisionally true.
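As a concrete instance of the indirect method described above, here is the classic proof by contradiction that the square root of two is irrational:

```latex
% Proof by contradiction: $\sqrt{2}$ is irrational.
Assume, for contradiction, that $\sqrt{2} = p/q$ with $p, q$ integers
sharing no common factor. Then $p^2 = 2q^2$, so $p^2$ is even, hence
$p$ is even; write $p = 2k$. Substituting gives $4k^2 = 2q^2$, i.e.\
$q^2 = 2k^2$, so $q$ is also even. But then $p$ and $q$ share the
factor $2$, contradicting the assumption. Hence $\sqrt{2}$ is
irrational. \qed
```

No finite set of observations could establish this; the certainty comes entirely from the deductive structure, which is exactly the contrast with inductive scientific reasoning the passage draws.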


Seven lessons on how technology transformations can deliver value

Not only do the transformations focused on talent strategy stand out in their value potential, but they are also much more commonplace at top-performing companies. Top-quartile respondents are more than three times likelier than their bottom-quartile peers (41 percent, compared with 12 percent) to say they’ve pursued a transformation of their talent strategy in recent years. Yet the need to address talent is universal and urgent. Respondents believe that more than 40 percent of their workforce will need to be either replaced or fundamentally retrained to make up for their organizations’ skills gaps. But only 15 percent of respondents say their companies plan to pursue a talent-strategy transformation in the next two years, even though the talent challenge remains considerable. At companies that have pursued recent transformations, the top challenges to doing so continue to revolve around talent as well as culture: namely, skill gaps and cultural differences, the difficulty of changing cultures and ways of working, and difficulty finding talent to fill new roles—which is as challenging for top performers as it is for everyone else. Talent also appears to impede progress at the companies that haven’t pursued technology transformations.


More Intelligent Medicine

The combination of human and machine intelligence could optimize the practice of clinical medicine and streamline health care operations. Machine learning-based AI tools could be especially valuable because they rely on adaptive learning. This means that with each exposure to new data, the algorithm gets better at detecting telltale patterns. Such tools have the capacity to transcend the knowledge-absorption and information-retention limits of the human brain because they can be “trained” to consider millions of medical records and billions of data points. Such tools could boost individual physicians’ decision-making by offering doctors accumulated knowledge from billions of medical decisions, billions of patient cases, and billions of outcomes to inform the diagnosis and treatment of an individual patient. AI-based tools could alert clinicians to a suboptimal medication choice, or they could triage patient cases with rare, confounding symptoms to rare-disease experts for remote consults. AI can help optimize both diagnostic and prognostic clinical decisions, it can help individualize treatment and it can identify patients at high risk for progressing to serious disease or for developing a condition, allowing physicians to intervene preemptively.


Surviving Zombie Scrum

There’s not one specific cause of Zombie Scrum, but in relation to the symptoms we described earlier, we can share some common causes. Generally speaking, Zombie Scrum systems occur in organizations that optimize for something other than actual agility. This creates problems that the teams can usually not solve on their own. For example, Scrum Teams that operate in environments with Zombie Scrum rarely have a clear answer as to what makes their product valuable. Much like zombies that stumble around without a sense of direction, many Zombie Scrum Teams work hard on getting nowhere in particular. While they still produce something, the question remains whether they are actually effective. ... Another cause is the struggle many organizations face with shipping fast. Often-heard excuses are that the product is too complex, technology doesn’t support it, or customers aren’t asking for it. Shipping fast is perceived as a “nice to have”, instead of a necessary activity to manage risk and deliver value sooner. Without shipping fast, Scrum’s loop of Empirical Process Control collapses. In Zombie Scrum, organizations don’t create safety to fail. Teams can’t improve when they experience no room for uncertainty, doubt, or criticism. They often develop all kinds of defensive strategies to prevent uncertainty.


AI Must Play a Role in Data Cloud Management

Data intelligence, or the use of data to glean useful information, allows a business to increase both its revenue and its position in the market. But the continual multiplication of data and its sources are making an already substantial challenge even more laborious. This emphasis on data is where artificial intelligence (AI) can play an especially useful role. By leveraging the cloud and AI for the storage, collection, and analysis of data, a business can monetize information in a fast, effective manner. Indeed, mastering data management through the use of the cloud will continue to be top of mind for many IT groups as they are asked more and more to improve business agility through the fostering of better business intelligence. Thus, data science -- the large umbrella under which AI, machine learning, automation, data storage, and more all fall -- will see huge leaps in growth both this year and in the years ahead. The cloud is perfectly positioned to assist organizations in AI because of its unique ability to provide business with flexibility, agility, scalability, and speed that other models of infrastructure simply can’t achieve at the same level. If the core of a business isn’t managing a datacenter, then the cloud is all the more appealing, since it allows IT teams to focus on the value-driving projects that will truly make a difference for employees and customers.


Why data privacy will be the catalyst for digital identity adoption

Firstly, the story around digital identities needs to change. What they won’t be is a one-stop-shop to access every piece of personal information about you at the touch of a button, shareable and stealable. What digital identities could be, if we put data privacy at their core, is selective. We have the opportunity to create a technology, which means people only need to share the specific data they need at any one time, withholding as much data as they can to get the job done. This doesn’t seem too big of an ask, either. Mastercard recently partnered with Deakin University and Australia Post to test out a digital ID solution enabling students to register for their exams digitally. This removed the need for tiresome paperwork and trips to campus, but also reduced the amount of data shared about each student. Students created a digital identity with Australia Post, using this to gain access to their university exam portal. With each registration, only specific personal information was required to allow students’ entry to the exam portal – nothing was shared that didn’t need to be. Now imagine this in our banks, shops, and workplaces. Rather than revealing most of your ‘identity’ with every purchase of alcohol, you only show your ID documents when you first create the identity – to verify that you are who you say you are.
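The selective-disclosure idea can be sketched in a few lines. All attribute names and values here are invented, and real systems use verifiable credentials with cryptographic selective disclosure rather than plain filtering, but the principle — release only what the verifier asks for — is the same:

```python
def present(identity: dict, requested: set) -> dict:
    """Release only the attributes a verifier requests; withhold the rest.
    (Illustrative sketch -- production schemes sign each disclosed claim
    so the verifier can check it without seeing the others.)"""
    return {k: v for k, v in identity.items() if k in requested}

# Hypothetical full identity held by the student, never shared wholesale.
identity = {
    "name": "Alex Doe",
    "date_of_birth": "2001-05-17",
    "student_id": "S1234567",
    "home_address": "12 Example St",
}

# The exam portal only needs to know who the student is -- not where they live.
claim = present(identity, {"student_id", "name"})
print(claim)
```

The verifier sees two fields; the date of birth and home address never leave the wallet, which is the data-minimisation property the article describes.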



Quote for the day:

"Don't dare to be different, dare to be yourself - if that doesn't make you different then something is wrong." -- Laura Baker

Daily Tech Digest - December 28, 2020

6 habits of successful IT leaders in 2021

“One of the many things we have learned from this crisis is how much improvement many of us need as IT leaders. Getting into the habit of working on developing our emotional intelligence daily will make us better leaders. This is often pointed out in others. However, we need to examine ourselves and find better ways to deal with the many emotions that arise from our current circumstances. IT leaders need to examine their own level of empathy as they manage folks they may no longer be able to walk over to and have a conversation with as you please. As we lead during this time of flexible schedules and distributed workforce, focus on developing more empathy and, honestly, just a bit more grace.”  “Be vulnerable and provide an atmosphere that will allow your team to feel supported to still do their best work even in this difficult time. Do not be that leader with a team that looks to get as far away from you following this crisis, or the leader whose team members throw in the towel before this crisis ends just to maintain their sanity.” – Cedric Wells, Director, IT Infrastructure Services, The Gorilla Glue Company ... Meditation is a powerful habit that can unlock this superpower. Many top business leaders, such as Ray Dalio, and bestselling authors, such as Yuval Harari, credit meditation as a key contributor to their success.


SolarWinds Attack Gives Rise to New Runtime Security Models

A critical observation to make about this attack is that even though the attackers already had a digitally signed backdoor, they still needed to bring additional malicious code into the environment. The backdoor was a pretty big chunk of code and contained several C2 (command and control) functions compiled as part of the legitimate product. And yet, even this unusually big backdoor had no means to spread and perform sophisticated injection and theft scenarios. It required a post-deployment file-less malware (FireEye called it TEARDROP). It is thought that TEARDROP deployed a version of the Cobalt Strike BEACON payload, a penetration testing tool made for red teams that can also be used by attackers. This fact is critical since it is true of almost any attack and most other backdoor cases. They look like tiny innocent coding oversights – basically, like any other vulnerabilities created as an honest mistake. From this point on, intentional backdoors and incidental vulnerabilities are used in very similar ways. Both are utilized to bring real malicious code – the exploit – into the target environment and perform the actual attack.


2021 will be the year open source projects overcome their diversity problems

In October 2020, the Linux Foundation announced a new Software Developer Diversity and Inclusion project to draw on science and research to deliver resources and best practices that increase diversity and inclusion in software engineering. Following the age-old tenet that “you cannot manage what you don’t measure”, the Hyperledger Diversity, Civility, and Inclusion (DCI) Working Group is focused on “measuring and improving the health of our open source community.”  In the OpenJS community, the Node+JS diversity scholarship program provides support to those from traditionally underrepresented or marginalized groups in the technology or open source communities who may not otherwise have the opportunity to attend the event for financial reasons. At KubeCon + CloudNativeCon this year, The Cloud Native Computing Foundation announced The Inclusive Naming Initiative to help remove harmful, racist, and unclear language in software development. At IBM, we had a similar program underway, and we have joined the CNCF initiative to further the cause. ... The AI Inclusive initiative seeks to increase the representation and participation of gender minority groups in AI. It offers events, tutorials, workshops, and discussions to guide community members in their AI careers.


Homomorphic Encryption: The 'Golden Age' of Cryptography

The origins of homomorphic encryption date back to 1978. That's when a trio of researchers at MIT developed a framework that could compute a single mathematical operation (usually addition or multiplication) under the cover of encryption. The concept gained life in 2009, when Craig Gentry, now a research fellow at the blockchain-focused Algorand Foundation, developed the first fully homomorphic encryption scheme for his doctoral dissertation at Stanford University. Gentry's initial proof was simply a starting point. Over the past decade, security concerns related to cloud computing, the Internet of Things (IoT), and the growing demand for shared and third-party data have all pushed the concept forward. Along the way, more powerful homomorphic algorithms have emerged. Today, the likes of IBM and Microsoft have entered the space, along with the US Defense Advanced Research Projects Agency (DARPA) and an array of startups. "There is a tremendous benefit to being able to perform computations directly on encrypted data," says Josh Benaloh, senior cryptographer at Microsoft Research. "This allows computations to be outsourced without risk of exposing the data."
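The single-operation schemes of that era can be illustrated with textbook RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields an encryption of the product of the plaintexts. A minimal sketch (deliberately toy-sized and insecure, for illustration only):

```python
# Textbook RSA is multiplicatively homomorphic: E(a) * E(b) mod n = E(a * b).
# Tiny parameters for demonstration -- real keys are thousands of bits.
p, q = 61, 53
n = p * q                    # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 9
product_cipher = (enc(a) * enc(b)) % n   # computed on ciphertexts only
assert dec(product_cipher) == a * b      # 63, recovered without ever decrypting a or b
```

The party doing the multiplication never sees `a` or `b`; fully homomorphic schemes like Gentry's extend this idea to arbitrary combinations of additions and multiplications.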


How to securely hash and store passwords in your next application

A "salt" is a random piece of data that is added to the data you want to hash before you actually hash it. Adding a salt to your data before hashing it makes the output of the hash function different than it would be if you had only hashed the data. When a user sets their password (often on signing up), a random salt should be generated and used to compute the password hash. The salt should then be stored with the password hash. When the user tries to log in, combine the salt with the supplied password, hash the combination of the two, and compare it to the hash in the database. Without going into too much detail, hackers commonly use rainbow table attacks, dictionary attacks, and brute-force attacks to try to crack password hashes. While hackers can't compute the original password given only a hash, they can take a long list of possible passwords, compute hashes for them, and try to match them against the hashes in the database. This is effectively how these types of attacks work, although each of the above works somewhat differently. A salt makes these attacks far more expensive: because each user's hash is derived from a unique salt, precomputed tables become useless and every candidate password must be re-hashed separately for every user.
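The flow described above can be sketched with Python's standard library alone. PBKDF2 here stands in for whatever slow password hash your framework provides (bcrypt, scrypt, Argon2); the function names are illustrative:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Generate a random salt (if not supplied) and derive a slow, salted hash."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest        # store BOTH alongside the user record

def verify_password(password, salt, expected):
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, expected)

# Same password, two different salts -> two different hashes
salt1, hash1 = hash_password("hunter2")
salt2, hash2 = hash_password("hunter2")
assert hash1 != hash2
assert verify_password("hunter2", salt1, hash1)
assert not verify_password("wrong", salt1, hash1)
```

Note the constant-time comparison at verification: a plain `==` on hashes can leak timing information.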


SaaS security in 2021

It’s clear to IT leaders that unvetted SaaS solutions (shadow IT) pose a variety of risks, including exposure of sensitive information, data ownership issues and regulatory compliance problems. The question is who is best suited to mitigate those risks, and in 2021, more companies will find that it takes a multidisciplinary strategy. A proactive governance approach requires a defined process involving a multidisciplinary team that ensures visibility and directly addresses risks to keep exposure within acceptable levels. Companies have to classify data in terms of integrity, confidentiality and availability to find the ideal balance between security and costs and determine acceptable risk levels. Cloud providers share responsibility to keep data secure along with the company, so it’s important to define exactly who is responsible for what. Companies typically manage user access, endpoint devices and data while SaaS vendors oversee apps, virtual machines, databases, etc. To fulfill their governance objectives, IT leaders will look for SaaS providers that offer multiple configuration options, including password settings/identity federations and authorization models, as well as availability plans to meet goals related to recovery time and recovery points.


Top five telecoms trends for 2021

5G has had many false starts, but 2021 could be the year when it really starts to take a predominant role in the telecoms space. With so many people now working remotely and using collaboration and messaging tools or video calls to communicate, we’ve started to see the demise of the traditional phone call. 5G is the ideal solution to replace landlines, using a SIM card as a fixed wireless access (FWA) to a cell tower, rather than having to install fibre cables physically into streets and homes. Investing in 5G infrastructure to give more workers around the country access to high quality, superfast connectivity is looking more and more like a political imperative to keep as much of the economy as possible working and productive. If it’s in the national interest, we might even see government support being provided to networks to deliver widespread 5G… or so networks will be hoping. ... Working from home has been a dominant theme of the coronavirus pandemic. Even if vaccination programmes soon return life to “normal” next year, some workplaces may not reopen their doors, on the basis that there is no longer a compelling commercial case to maintain a physical presence. All the necessary infrastructure businesses need to function, including telecoms, can be hosted in the public cloud. Remote connectivity is all they need.


The future of work is happening now thanks to Digital Workplace Services

It’s absolutely true that the pandemic elevated digital workplace technology from being a nice-to-have, or a luxury, to being an absolute must-have. We realized after the pandemic struck that public sector, education, and more parts of everyday work needed new and secure ways of working remotely. And it had to become instantaneously available for everyone. You had every C-level executive across every industry in the United States shifting to the remote model within two weeks to 30 days, and it was also needed globally. Who better than Dell on laptops and these other endpoint devices to partner with Unisys globally to securely deliver digital workspaces to our joint customers? Unisys provided the security capabilities and wrapped those services around the delivery, whereas we at Dell have the end-user devices. ... One of the big challenges in a merger or acquisition is how to quickly get the acquired employees working as first-class citizens as quickly as possible. That’s always been difficult. You either give them two laptops, or two desktops, and say, “Here’s how you do the work in the new company, and here’s where you do the work in the old company.” 


7 predictions for what lies ahead for health tech in 2021

The industry has heard about advances in AI for years, but in 2021, healthcare will start to see the benefits of machine learning in solutions that are highly scalable, predicts Tom Knight, CEO of Invistics. New technology funded by the National Institutes of Health (NIH), for example, can detect and fix many problems with medication administration, while helping to raise hospital revenues by millions of dollars annually, Knight said. “Providers are increasingly accepting AI’s role in medicine and the capability to identify sequences and trends in data that humans cannot,” said Yann Fleureau, co-founder and CEO of Cardiologs. Kimberly Powell, vice president and general manager at NVIDIA Healthcare, predicts that hospitals will get “smarter.” Similar to the experience at home, smart speakers and smart cameras will help automate and inform activities. The technology, when used in hospitals, will help scale the work of nurses on the front lines, increase operational efficiency and provide virtual patient monitoring to predict and prevent adverse patient events, said Powell. ... John Matthews, managing director of healthcare and life sciences at Teradata, said the smart money is on leaders that recognize the difference between solutions that solve problems and trends that attract mob mentality and next-silver-bulletism.


Remote work: 10 ways to upgrade your working from home setup

Cybersecurity is more important than ever for this newly distributed and heterogeneously equipped workforce, for whom commuting is a fading memory (along with real-world interaction with colleagues and clients). Although there are obvious downsides to remote working, including work/life balance and long-term mental health, many of us are likely to continue working from home on a regular basis after the pandemic. That being so, it's obviously a good idea to have the best equipment for the job: there's a big difference between spending a couple of hours on your laptop at the kitchen table outside normal working hours and making this arrangement your primary workspace. To get an idea of the kind of setups that knowledge workers should be looking at in 2021 and beyond, it's worth examining the contents of ZDNet contributors' home offices, as featured on this site over recent weeks. These are journalists who have been working from home for years, and who are also, by definition, up-to-speed with the latest technology. This means that their gear is mostly at the power-user end of the knowledge worker spectrum, giving a good indication of what may become standard fare in the 'new normal'.



Quote for the day:

“People rarely succeed unless they have fun in what they are doing.” -- Dale Carnegie

Daily Tech Digest - August 24, 2020

What’s New In Gartner’s Hype Cycle For Emerging Technologies, 2020

Gartner believes that Composite AI will be an enabling technology for organizations that don’t have access to large historical data sets or the in-house AI expertise to complete complex analyses. Second, Gartner believes that Composite AI will help expand the scope and quality of AI applications. Early leaders in this area include ACTICO, Beyond Limits, BlackSwan Technologies, Cognite, Exponential AI, FICO, IBM, Indico, Petuum and ReactiveCore. ... The goal of Responsible AI is to streamline how organizations put responsible practices in place to ensure positive AI development and use. One of the most urgent use cases of Responsible AI is identifying and stopping “deep fakes” production globally. Gartner defines the category with use cases that involve improving business and societal value, reducing risk, increasing trust and transparency and reducing bias with AI. Of the new AI-based additions to the Hype Cycle this year, this is the one that leads all others on its potential to use AI for good. Gartner believes responsible AI also needs to increase the explainability, accountability, safety, privacy and regulatory compliance of organizations as well.


How to ensure CIO and CMO alignment when making technology investment decisions

Often, total cost of ownership (TCO) for handling the complexity, maintenance and technical debt of new platforms can turn out to be a real burden for organisations. In fact, according to Gartner, more than three-quarters of organisations found the technology buying process complex or difficult. But is this really surprising? Implementing the right technology solution for the business is often challenging due to the different priorities that CMOs and CIOs have. While for the CMO the priority is to adopt the latest innovations as soon as possible in order to stay ahead of the competition, this need has to fit the CIO’s focus on TCO for the long term. The weight of these options is what drives a wedge between those key decision makers, creating a need to find common ground sooner. Being aligned is essential so that they can choose the right options which will allow marketing to execute on strategy and hit company targets on the one hand, and meet operational requirements for maintenance, governance and risk avoidance on the other, which are top of mind for the CIO. To ensure that the best options are selected for the business, the CMO’s priorities need to meet those of the CIO and vice versa.


Save-to-transform as a catalyst for embracing digital disruption

In this approach, businesses evolve through infrastructure investments in digital technologies. In turn, these technologies can deliver dramatic improvements in competitiveness, performance and operating efficiency. In response to the pandemic, the survey shows that organizations are evolving into a “Save-to-Thrive” mindset, in which they are accelerating strategic transformation actions specifically in response to challenges posed by COVID-19 to make shifts to their operating models, products and services and customer engagement capabilities. “The Save-to-Thrive framework will be essential to success in the next normal as companies rely on technology and digital enablement — with a renewed emphasis on talent — to improve their plans for strategic cost transformation and overall enterprise performance improvement,” said Omar Aguilar, principal and global strategic cost transformation leader, Deloitte Consulting. “Companies that react quickly and invest in technology and digital capabilities as they pursue the strategic levers of cost, growth, liquidity and talent will be best-positioned to succeed.”


How big data is solving future health challenges

Unlike many other data warehousing projects, Stringer said the focus is not just on collecting and using data if it has a specific quality level. Instead, when data is added to LifeCourse, its quality level is noted so researchers can decide for themselves if the data should or should not be used in their research. The GenV initiative relies on different technologies, but the two core pieces are the Informatica big data management platform and Zetaris. Informatica is used where traditional extract, transform and load (ETL) processes are needed because of its strong focus on usability. Stringer said this criterion was heavily weighted in the product selection process. Usability, he said, is a strong analogue for productivity. But with a dependence on external data sources and a need to integrate more data sources over the coming decades, Stringer said there needed to be a way to use new datasets wherever they resided. That was why Zetaris was chosen. Rather than rely on ETL processes, Stringer said the Zetaris platform lets GenV integrate data from sources where ETL is not viable.  


5 Key Capabilities of a Next-Gen Enterprise Architecture

Many enterprise architects look to rationalize and centralize emerging technologies, processes, and best practices, making them available to all business units in a self-service mode to accelerate digital transformation and modernization initiatives across the enterprise. By defining enterprise-wide technology standards and tools, enterprise architects strive to plan for reusability, reducing costs and future proofing the architecture as technology changes and enforcing data governance and privacy policies to democratize data so that trusted data travels securely throughout the enterprise in a frictionless, self-serve fashion. Traditional data management solutions to support next-gen architectures are expensive, manual, and require time-consuming processes, while newer emerging niche vendor solutions are fragmented. As such, they require extensive integration to stitch together end-to-end workstreams, requiring data consumers to wait months to get useful data. Therefore, a next-gen enterprise architecture must support the entire data pipeline, which includes the ability to ingest, stream, integrate, and cleanse data. 


India's National Digital Health Mission: A New Model To Enhance Health Outcomes

The NDHM digital health platform is guided by an architectural blueprint, the National Digital Health Blueprint (NDHB), developed a few months earlier. The NDHB has brought structure to the thinking and approach. It established the vision and principles, architecture requirements and specifications, applicable standards and regulations, high-priority services, and institutional mechanisms needed to realize the mission of digital health. The NDHB is crafted to unlock enormous benefits for citizens; create new opportunities along with financial, productivity, and transparency gains; and make a positive contribution to growth, innovation, and knowledge sharing. A digital platform with a national footprint evokes immediate pushback, as it is generally seen to steer the narrative towards centralization. The architecture deliberately and explicitly addresses this ‘concern’ to ensure that India’s overall federated structure of governance is reflected in the architecture as well. In a large country like India, where there are multiple layers of government – national (central), state, local (urban), and local (rural) – the responsibilities are distributed, and this is guaranteed by the constitution.


Data Governance Should Not Threaten Work Culture

The discipline of data governance must focus on knowing who these people are, helping them to make more actionable decisions, and empowering them to become better stewards. People who define data must know what it means to define data better, and that includes providing meaningful business definitions for data and managing how often data is replicated across the organization. People who produce the data must know what quality data looks like, and they must be evaluated on the quality of the data they produce. And the no-brainer: people in the organization who use the data must understand how to use it and follow the rules associated with using it appropriately. That means data consumers must follow the protection and privacy rules, the business rules, and use the data in the ethical manner spelled out by the organization. While people already define, produce, and use data, data governance requires that these people consistently follow the rules and standards for the actions they take with that data. The rules and the standards are important metadata, data about the data, that must be recorded and made available to people across the organization to assist in the discipline of data governance.


Defining a Data Governor

Without oversight, employees will misinterpret data, sensitive data may be shared inappropriately, employees will lack access to necessary data, and employees’ analysis will often be incorrect. A Data Governor will maintain and improve the quality of data and ensure your company is compliant with any regulations. It is a vital role for any informed company. With the exploding volume of data within companies, it has become extremely difficult for a small technical team to govern an entire organization’s data. As this trend continues, these Data Scientists and Analysts should transition themselves from their traditional reporting responsibilities to those of Data Governors. In a traditional reporting role, their day was filled with answering questions for various business groups around their needed metrics. The shift to Data Governors finds them instead creating cleaned, documented data products for those end business groups to explore themselves. This is called Democratized Data Governance, where the technical team (traditionally the data gatekeepers) handles the technical aspects of governance and shares the responsibilities of analytics with the end business groups.


Blockchain for Applications ~ A Multi-Industry Solution

The workings of blockchain are somewhat common knowledge now: a decentralized network of interconnected links that shares all data among its peers, keeping a chronological log of each transaction. Simply put, “everything that happens in the blockchain network is shared by all members of the network, and everyone has a record of it on their individual device.” Hence, in a way, these blocks form a binding chain with each other, and this decentralized model of information storage liberates users from the risks and inefficiencies of having all data stored in only one place. ... DApps, or decentralized applications, function without any central server mediating between two parties. Blockchain users operate mini-servers that work simultaneously to verify and exchange data. There are two kinds of blockchains, segregated on the basis of access and permissions: “permissionless” and “permissioned.” A permissionless network grants full transparency and allows each member to verify transaction details and interact with others while staying completely anonymous. Bitcoin works on a permissionless blockchain.
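The chronological, tamper-evident log described above comes from each block carrying the hash of its predecessor. A toy ledger in Python (no consensus, no proof-of-work; all names are illustrative) shows the mechanism:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    # Each new block records the hash of the block before it
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})
    return chain

def is_valid(chain):
    # Every block must still reference the current hash of its predecessor
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
for tx in ["alice->bob:5", "bob->carol:2"]:
    add_block(ledger, tx)
assert is_valid(ledger)

ledger[0]["data"] = "alice->bob:500"  # tamper with history...
assert not is_valid(ledger)           # ...and every later link breaks
```

Because altering any historical block changes its hash, the change is visible to every peer holding a copy of the chain.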


How to manage your edge infrastructure and devices

Another aspect to consider when managing edge infrastructure and devices is to invest in discovery processes. “Edge by nature creates a distributed approach – accelerated by the current global pandemic – that needs a more flexible style of management,” said David Shepherd, area vice-president, pre-sales EMEA at Ivanti. “But ultimately, if we don’t know what we are managing then it becomes difficult to even start managing in a comprehensive manner. “Effective discovery processes allow an organisation to apply the right management policies at the right time. As more devices start to appear at the edge, the context of the device plays a crucial role. “This includes the type of device and the interaction it has with the infrastructure, plus its location (often remote). Understanding what a device is and how it interacts is again crucial to applying a comprehensive management approach. ... “Zero-touch provisioning, for example, enables easier onboarding of IoT devices onto an IoT cloud platform, e.g. AWS, as it enables automatic provisioning and configuration. This prevents developer error during the provisioning and configuration process, as well as providing a more secure interaction between the device and platform, as the security framework had already been established on both ends during the pre-production stage.”



Quote for the day:

"The hard part isn't making the decision. It's living with it." -- Jonas Cantrell

Daily Tech Digest - April 22, 2020

Cisco integrates SD-WAN connectivity with Google Cloud

The Cisco/Google platform is important because software- and infrastructure-as-a-service (SaaS and IaaS) offerings have been driving SD-WAN implementations in the past year, experts say. “One of the key drivers of SD-WAN has been the increasing consumption of cloud services in the enterprise, across both IaaS and SaaS applications,” said Rohit Mehra, vice president, network infrastructure at IDC. “With some of the largest public cloud providers playing an increasing role in how these enterprise apps are consumed and delivered, and bringing their vast global networks to bear, they will increasingly have a role to play with how WANs are architected going forward.” For enterprises, one of the key takeaways from this announcement is that “SD-WANs will now be able to play a better functional role in the delivery of cloud services such as IaaS and SaaS, and likewise, the large public-cloud purveyors will benefit from providing a stronger value proposition towards multi-cloud deployments,” Mehra said. "Secondly, enterprises will benefit in terms of extending policy and governance beyond applications to other attributes such as locations/geo and multiple clouds.”



The new normal: A step-by-step guide for the enterprise

From a business perspective, we need to identify and understand the negative effects that occurred during the lockdown. What additional damage will likely occur in the short and long terms? This can range from relatively minor problems, such as a slowdown of some customer deliveries or lack of materials for manufacturing, to a complete shutdown of some operations due to on-premises systems that could not be maintained or fixed during the lockdown. You need to assign dollar amounts to each issue. Keep in mind that some of these will be hard costs, meaning sales and billing. Others will be soft costs, such as reputation. What points hurt the business the most? We need this information to prioritize triage. For most enterprises, this step will immediately identify the need to migrate some assets to cloud. The migration will typically target existing on-premises systems that managed to limp through the crisis. Based on historical migration data, the most common move will involve a “lift and shift” of resources, such as storage and compute, to a public cloud provider. Most enterprises will opt to refactor the applications at a later date; a few will refactor as the applications migrate.


Here are six tech roles companies want to fill now, despite the coronavirus lockdown


"The fact that recruitment is still continuing with relative strength in IT is perhaps unsurprising due to the on-going need across most sectors to conduct operations remotely," said Ann Swain, CEO of APSCo. John Gaughan, managing director of technology recruitment firm Finlay James, said he has a number of clients who are hiring and using remote on-boarding when filling SaaS tech sales roles and technology leadership positions. Recruiters are switching from in-person interview to video meetings with candidates, and in some cases, with everyone working from home, it may be some time before new recruits actually meet the people they are working with. The APSCo report also noted that recruitment for marketing has also held up surprisingly well, which it said is probably down to businesses ramping up their digital marketing and communications activities. There has also been an increase in roles involving employee engagement. "With many teams now working from home, the challenge of keeping remote employees engaged and operating as a cohesive unit has never been greater," the report said.


Contactless Payments: Healthy COVID-19 Defense


From a fraud-fighting standpoint, compared with swiping a card and signing a paper receipt, contactless is much more secure. And while some call these capabilities "tap and go," in reality, there's no contact required: You just have to wave your card or compatible smartphone close to the card reader until it beeps. Cards with this capability began to be rolled out in the U.K. in 2008, and the vast majority of payment terminals in stores now work with them. Other systems that don't get refreshed very often - for example, inside buses - have been slowly catching up. Here in the Scottish city of Dundee, last year most buses finally got upgraded with the ability to accept contactless payments. Many newer smartphones also have contactless capability via Apple Pay, Android Pay or Samsung Pay. Just load a payment card and use your smartphone to pay without touching anything, up to certain amounts. As a bonus, the smartphone-based approaches add additional layers of security, such as needing to use your fingerprint or face to unlock the contactless payment capability.


Remote Agile (Part 4): Anti-Patterns

Hybrid events create two classes of teammates — remote and co-located — where the co-located folks are calling the shots. Beware of the distance bias — when out of sight means out of mind — thus avoiding the creation of a privileged subclass of teammates: “Distance biases have become all too common in today’s globalized world. They emerge in meetings when folks in the room fail to gather input from their remote colleagues, who may be dialing in on a conference line.” To avoid this scenario, make sure that once a single participant joins remotely, all other participants “dial in,” too, to level the playing field. Every communication feels like a (formal) meeting. ... Instead, put trust in people, uphold the prime directive, and be surprised what capable, self-organizing people can achieve once you get out of their way. Trust won’t be built by surveilling and micro-managing team members. Therefore, don’t go rogue; the prime directive rules more than ever in a remote agile setup. Trust in people and do not spy on them — no matter how tempting it might be. Read more about the damaging effect of a downward spiraling trust dynamic from Esther Derby.


COVID-19 & The Digital Imperative


In a recent interview, John Chambers, former Cisco CEO and now Venture Capitalist, said the pandemic will force many “companies to use this moment to make the transition to digital. Things will get worse before they get better— that is the realistic optimist in me speaking,” said Chambers, who has predicted up to 40% of the Fortune 500 and 70% of startups will no longer be around in a decade if they don’t make the digital transition. The disruptions brought about by the pandemic can be expected to accelerate the shift to digital that has already been underway. It is not just that organizations the world over have radically altered their work environments to accommodate work from home and technologies such as video conferencing and remote networking on a massive scale. It is also that the consequences of the pandemic are likely creating digital disruption opportunities and imperatives across the economy, in industries as diverse as food and beverage, hospitality, real estate, travel, and government.


How microsegmentation architectures differ

It's important to remember that microsegmentation is not just a data center-oriented technology. "Many security incidents start on end-user workstations, because employees click on phishing links or their systems become compromised by other means," Cross says. From that initial point of infection, attackers can spread throughout an organization's network. "A microsegmentation platform should be able to enforce policies in the data center, on cloud workloads, and on end-user workstations from a single console," he explains. "It should also be able to stop attacks from spreading in any of these environments." As with many emerging technologies, vendors are approaching microsegmentation from various directions. Three traditional microsegmentation types are host-agent segmentation, hypervisor segmentation and network segmentation. ... This microsegmentation type relies on agents positioned in the endpoints. All data flows are visible and relayed to a central manager, an approach that can help reduce the pain of discovering challenging protocols or encrypted traffic.
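One way to picture what a microsegmentation policy engine enforces, regardless of where the agents sit, is a default-deny whitelist keyed on workload tags rather than network location. A sketch (not any vendor's API; all tags and addresses are hypothetical):

```python
# Allowed flows as (source tag, destination tag, port); anything else is denied.
POLICIES = {
    ("web", "app", 8443),   # web tier may reach the app tier on 8443
    ("app", "db", 5432),    # app tier may reach the database
}

# Workloads are labeled by role, whether they live in the data center,
# in the cloud, or on an end-user workstation.
WORKLOAD_TAGS = {
    "10.0.1.5": "web",
    "10.0.2.7": "app",
    "10.0.3.9": "db",
    "10.0.4.2": "workstation",
}

def flow_allowed(src_ip, dst_ip, port):
    src = WORKLOAD_TAGS.get(src_ip)
    dst = WORKLOAD_TAGS.get(dst_ip)
    return (src, dst, port) in POLICIES  # default deny

assert flow_allowed("10.0.1.5", "10.0.2.7", 8443)
# A compromised workstation cannot reach the database directly:
assert not flow_allowed("10.0.4.2", "10.0.3.9", 5432)
```

The default-deny posture is what stops lateral movement: a foothold on one workload only reaches the handful of flows its tag explicitly permits.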


Google wants to make it easier to analyse health data in the cloud


Dr John Halamka, president of Mayo Clinic Platform, said: "We're in a time where technology needs to work fast, securely, and most importantly in a way that furthers our dedication to our patients. Google Cloud's Healthcare API accelerates data liquidity among stakeholders, and in-return, will help us better serve our patients." The issue of interoperability remains a tricky subject within healthcare. Battles over data formats and ownership stymies efforts to join up healthcare systems and make patient data available to healthcare professionals whenever and wherever they need it. In the US, inroads have been made recently through the passing of rules by Centers for Medicare and Medicaid Services (CMS) and the National Coordinator for Health Information Technology (ONC) to make it easier for healthcare organisations to exchange patient data, and for patients to access their own information. Google said its Cloud Healthcare API was designed to scale and support interoperability and patient access. It added that the COVID-19 pandemic had made the need for increased data interoperability more important than ever.


How developer teams went remote overnight

Remote work isn’t new for communications API specialist Twilio, but the pandemic has caused a massive shift. Prior to the coronavirus outbreak, CEO Jeff Lawson told TechCrunch that around 10 percent of the company worked remotely. “For a company like us to go from partially virtual to fully virtual in a short period of time, it’s not without its hiccups, but it has worked pretty well,” he said. Some of that 10 percent of remote workers included the team of Marcos Placona, manager for developer evangelism at Twilio. “My team has always worked on a distributed basis with direct reports in the US, UK, and across Europe,” Placona told InfoWorld. The various time zones involved make it “tough to work this way,” he admits, “but we have regular check-ins with the team and individuals with weekly one-to-ones.” Developer evangelists at Twilio still contribute code and have to track contributions, alongside writing documentation and filtering through reams of customer feedback. During the pandemic this team has shifted to holding daily remote stand-ups.


A Tale of 3 Breaches: Incident Response Challenges

Three recently disclosed health data security incidents - including the discovery of a large email hack that happened nearly a year ago - serve as reminders of the ongoing incident response challenges facing healthcare organizations. A 2019 email hacking incident that affected 112,000 individuals was disclosed last week by Dearborn, Michigan-based Beaumont Health. Also recently reported were a February ransomware attack on Wilmington, Del.-based substance abuse treatment provider Brandywine Counseling and Community Services, which affected the clinical records of an undisclosed number of patients, and a phishing scam impacting more than 27,000 patients and employees of Wisconsin-based Advocate Aurora Health. The COVID-19 crisis is likely to make it even more difficult for healthcare organizations to respond to security incidents, some observers say. "As long as COVID-19 drives IT activities in supporting remote workers and setting up patient triage tents with access to technology infrastructure, IT may have difficulty monitoring network activity for anomalous events unless a security operations center is in place to monitor around the clock, along with centralized log event management that can automate detection of and alerting on activities of concern," notes Keith Fricke.



Quote for the day:


"Many men may see the King in a Kid but it takes a true leader to nurture it" -- Bernard Kelvin Clive


Daily Tech Digest - December 24, 2019

The neural basis of confirmation bias

Researchers combined functional magnetic resonance imaging (fMRI) with a behavioral betting task. Participants’ blood oxygen level-dependent (BOLD) signals were examined through moderated mediation analysis, capturing the relationship between brain activity and multiple levels of performance, and testing whether the mediation differed between conditions of agreement and disagreement. When participants learned their partners agreed with their opinions, they significantly increased their bets, confirming they were confident in their decision. Participants only slightly decreased their wagers when their partners disagreed. The impact of the partner’s opinion was far greater when it confirmed the player’s judgment, and the partner’s opinion was more likely to be disregarded when it was contradictory — consistent with confirmation bias. The functional brain imaging data revealed a region whose activity modulation was associated with decision-making and memory: the posterior medial prefrontal cortex mediated the strength of confirming opinions over disconfirming opinions and tracked agreements more closely than disagreements.



Building Tech Solutions And Giving Back

One of my favorite success stories is a woman who had gone through Coburn Place as a resident, which means she was abused and needed to find shelter for herself and her children. We were doing our Holiday Home party, where we provide Christmas trees, ornaments and other holiday decorations for residents to have in their apartments. She came in wearing a pink t-shirt, her hair was pink, and she came up to me asking for pink lights for her Christmas tree. She had a great smile and her positive attitude was contagious. I asked her why she wanted pink lights, and she said her cancer was gone and she wanted everything to be pink for the holidays. For any person to have gone through what she did and still have that positive attitude and outward happiness, to me, defines success. ... Focusing on what we know about our industry and using it to educate the next generation through their teachers will create more connection between what the kids are learning and a potential career in technology.


4 tips to help keep your APIs safe

Even when enterprises do all the right things and make sure everything is protected, they can still be at risk of breaches or attacks thanks to third-party services. Waugh said a fair chunk of the breaches he sees are not direct attacks on a company's systems but a compromise of a third party that has access to those systems to process data. "As an industry, we do a really poor job of understanding risk when it comes to third parties. As much as we work to keep ourselves secure, we have a very limited understanding of what third parties we have out there. How do we secure those?" he said. Companies should have a thorough understanding of the third-party partners accessing their data, sending them security questionnaires, requesting certifications or demanding reports. But even this, Waugh said, can still leave companies vulnerable to attacks. Last year, India's national ID database, which holds identity and biometric information such as fingerprints and iris scans on more than 1.1 billion registered Indian citizens, was exposed through a vulnerable API. According to ZDNet's Zack Whittaker, a utility provider, Indane, had access to the Aadhaar database through an API, which the company relies on to check a customer's status and verify their identity.
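There is no single fix for third-party risk, but one baseline defense is making every partner-facing API authenticate its callers and scope what each one may do. A minimal sketch of that idea, with a hypothetical partner registry and scope names (not any real system's design):

```python
import hmac
import secrets

# Hypothetical registry of third-party credentials and the scopes granted to each
PARTNER_KEYS = {
    "partner-a": {"secret": secrets.token_hex(32), "scopes": {"identity:verify"}},
}

def authorize(partner_id: str, presented_secret: str, scope: str) -> bool:
    """Allow a request only from a known partner presenting the right secret and scope."""
    partner = PARTNER_KEYS.get(partner_id)
    if partner is None:
        return False
    # Constant-time comparison avoids leaking the secret through timing differences
    if not hmac.compare_digest(partner["secret"], presented_secret):
        return False
    # Deny anything outside the scopes explicitly granted to this partner
    return scope in partner["scopes"]
```

Scoping each partner down to the minimum it needs means a single compromised partner key exposes one narrow capability rather than the whole database.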


Scientists Develop World’s First ‘Unhackable’ Encryption System

The chip designed by the researchers generates a one-time-only key when data is sent through it. The data is stored as light and passed through a specially designed chip that bends and refracts the light to scramble the information. The trick behind the tech is that the bending and refracting of light is unique every time, as it depends on the data being sent through the chip. It would be fair to say the chip is a physical realization of the one-time pad (OTP), whose one-time-key principle also underpins the one-time passwords popularly used today to authenticate various services. A paper published in Nature Communications titled “Perfect secrecy cryptography via mixing of chaotic waves in irreversible time-varying silicon chips” says the new technology exploits correlated chaotic wavepackets, which are mixed in inexpensive, CMOS-compatible silicon chips. The specially engineered chips can deliver 0.1 Tbit of different keys for every mm of the length of the input channel. According to Professor Andrea Di Falco from the School of Physics and Astronomy at St Andrews University, “It’s the equivalent of standing talking to someone using two paper-cups attached by string.”
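The one-time pad the chip realizes in hardware is simple to state in software: a truly random key exactly as long as the message, used once, XORed with the plaintext. A generic sketch of the scheme (an illustration of the textbook OTP, not the researchers' implementation):

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # A truly random key, exactly as long as the message, used only once
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR with the same key recovers the plaintext
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

The scheme is information-theoretically secure only if the key is random, as long as the message, kept secret and never reused; distributing such keys is exactly the burden the chip's chaotic light scrambling is meant to remove.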


How Can a Digital Twin Create a Seamless Workplace for Employees?

The way we work is changing. If you look at industry news headlines, you’ll see articles about the rise of shared working spaces, flexible work hours and remote working. Yes, Millennials and Gen Z’ers are big drivers of this change, but the conversation isn’t limited to the younger generations. In fact, in today’s economy, there is a multigenerational global talent war in many industries — like tech, finance and telecom — where workers of all ages and demographics are asking for flexible working arrangements, remote work and a more holistic perspective on productivity in exchange for their loyalty. Moreover, in our increasingly connected world, employees want their office environments to be as smart as their homes, cars and digital communities, with the ability to create a personalized experience. However, many traditional office buildings still operate in the “dark” with limited use of modern technology and little ability for employees to interact dynamically with their environment.


Slowing Data Security Tool Sprawl in a Hybrid Multicloud World

As infrastructure-as-a-service (IaaS), software-as-a-service (SaaS) and database-as-a-service (DBaaS) consumption becomes commonplace for enterprises, their data is becoming more dispersed than ever, making it extremely difficult for organizations to discover, visualize and protect their sensitive data across multiple environments. The same IBV study found that only 38 percent of organizations have the procedures and tools in place to operate a multicloud environment. Moreover, as data and workflows continue to move to the cloud, security teams are becoming inundated with security and compliance point tools, each designed for a specific environment and/or use case. This is leading to what many refer to as “tool sprawl.” Tool sprawl adds significant operational complexity, not only because security teams must work across disjointed dashboards and piecemeal reports, but also because it can lead to ineffective workflows and processes.


These AI-Powered Digital Health Devices Debut At CES 2020

Bruce Sharpe, founder and CEO of Singular Hearing, said that more than 466 million people worldwide are affected by some degree of hearing loss. “This staggering number is rapidly increasing. Modern hearing aids are miracles of miniaturization, customizability and flexibility. It's not their fault that noise has continued to be a problem,” said Sharpe. “Even powerful desktop computers were not good at handling noise. That is, until the last few years when good machine learning approaches came along.” “Now we finally have new, more effective solutions for noise, thanks to the perfect storm of new machine learning algorithms, the availability of better data sets, powerful GPUs for training, and smartphones in everyone's pocket that are capable of running the results,” added Sharpe. The Canadian startup's new product, HeardThat, is an app that uses AI to tune out background noise, enabling users with hearing loss to hear speech more clearly.
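HeardThat's models are proprietary, but the classical baseline such ML approaches improve on, spectral subtraction, is easy to sketch: estimate the noise spectrum from frames assumed to be speech-free, then subtract it from every frame before resynthesis. A simplified illustration with assumed parameters (not the app's algorithm):

```python
import numpy as np

def spectral_subtract(signal, frame=256, hop=128, noise_frames=10):
    """Reduce stationary noise by subtracting an estimated noise spectrum per frame."""
    window = np.hanning(frame)
    n = 1 + (len(signal) - frame) // hop
    # Slice the signal into overlapping, windowed frames
    frames = np.stack([signal[i * hop:i * hop + frame] * window for i in range(n)])
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    # Estimate the noise floor from the first few frames (assumed speech-free)
    noise = mag[:noise_frames].mean(axis=0)
    # Subtract it, flooring at a small fraction of the original magnitude
    clean = np.maximum(mag - noise, 0.05 * mag)
    out = np.fft.irfft(clean * np.exp(1j * phase), axis=1)
    # Overlap-add the processed frames back into a 1-D signal
    result = np.zeros(n * hop + frame)
    for i in range(n):
        result[i * hop:i * hop + frame] += out[i]
    return result[:len(signal)]
```

Modern ML denoisers replace the fixed subtraction with a learned mask, but the framing, FFT and overlap-add pipeline is broadly the same.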


How the CIO fought their way back from the edge of extinction


Some CIOs chose to draw a line in the sand. Their IT departments had a finite amount of resources and couldn't afford to implement dashboards for every person in every department. However, line-of-business employees weren't to be denied – and they had a crucial ally in the form of cloud computing. If the CIO wouldn't provision the tools they wanted, workers simply used their own departmental budgets to buy their own applications. Workers started bypassing IT departments entirely, establishing direct relationships with cloud vendors and signing their own software-licensing deals. The rise of shadow IT led many experts to speculate that the role of CIO was on borrowed time. After all, who needs a traditional IT director when the rest of the business can buy its own applications and run them on its own devices? The answer, in the end, turned out to be pretty much everyone. While it's easy to buy an application, it's not so easy to ensure that the governance of a service is established, that the data is secure, and that the software can be turned off – and the data extracted easily – when the contract is cancelled. It was here that CIOs excelled.


5 Ways Business Data Is Changing How People View Green Energy

While many companies are analyzing big data and choosing to adopt green energy, others believe either that there isn't enough information to support it or that it's too confusing. Sustainability efforts are often difficult to measure because they affect society on a large scale; their specific implications, benefits and disadvantages are hard to pinpoint and visualize. Further, the impact of these efforts is not always immediately obvious. As green energy has only become mainstream within the last few decades, there's only so much evidence to support it. That evidence also comes in the form of different metrics and measurement systems, and businesses don't know which metric is better than another or which would best fit their energy needs. Even leading brands need help deciding which ones will improve their efforts. While data to support green energy does exist, there's still much to be understood about it. In the future, the green energy sector can gather more supporting evidence by continuing to monitor sustainability efforts.


IoT Security: How Far We've Come, How Far We Have to Go

Compounding the danger of IoT threats is the rise of nation-state attackers, who are targeting firmware at scale or leveraging connected devices in DDoS attacks. They don't have to attack a major entity to have far-reaching effects, either: as NotPetya demonstrated, a nation-state actor can target a single supplier with devastating consequences. Organizations' attitude toward IoT security is similar to their approach to smartphones several years back, Heiland says: they're in the early stages of adapting their business models and putting processes in place to stay secure. At the same time, standards and regulations are emerging to show manufacturers how to build security into these devices from the start. A combination of poor device security and rising interest among attackers is driving businesses to pay more attention to the IoT. "The attack surface they're responsible for has grown so immensely," says Mike Janke, CEO of DataTribe, where a group of advisory CISOs uses the term "shadow IoT" to refer to the smartwatches, headphones, and tablets appearing on networks.



Quote for the day:


"If you don't make things happen then things will happen to you." -- Robert Collier