
Daily Tech Digest - January 13, 2026


Quote for the day:

"Don't let yesterday take up too much of today." -- Will Rogers



When AI Meets DevOps To Build Self-Healing Systems

Self-healing systems do not just react to events and incidents — they analyse historic data, identify early triggers or symptoms of failures, and act. For example, if a service is known to crash when it runs out of memory, a self-healing system can observe metrics like memory consumption, predict when the service is likely to fail, and take action to fix the issue—like restarting the service or allocating more memory—without human intervention. In AIOps, self-healing systems are powered by data science: machine learning models, real-time analytics, and automated workflows. ... Self-healing systems don’t rely on static rules and manual checks alone; they utilise real-time data streams and apply pattern and anomaly detection through machine learning to ascertain the state of the environment. A self-healing system is trying to gauge its own health all the time — CPU utilisation, latency, memory, throughput, traffic, security anomalies, etc. — to preemptively address an impending failure. The key component of every self-healing system is a cycle that reflects the process followed by intelligent agents: Detect → Diagnose → Act. ... The integration of artificial intelligence and DevOps signifies an important change in the way modern IT systems are built, managed, and evolved. As we have discussed here, AIOps is not just another form of automation — it is shifting operations from reactive models to intelligent, self-healing ecosystems.


Building a product roadmap: From high-level vision to concrete plans

A roadmap provides the anchor to keep everyone aligned amid constant flux. Yet many organizations still treat roadmaps as static artifacts — a one-and-done exercise intended to appease executives or investors. That’s a mistake. The most effective roadmaps are living documents that evolve with product and market realities. ... If strategy defines direction, milestones are the engine that keeps the train moving. Too often, teams treat milestones as arbitrary checkpoints or internal deadlines. Done right, they become powerful tools for motivation, alignment and storytelling. ... The best roadmaps aren’t written by PMs — they’re co-authored by teams. That’s why I advocate for bottom-up collaboration anchored in executive alignment. Before any roadmap offsite, sync with the CEO or leadership team. Understand what they care about and why. If they disagree with priorities, resolve those conflicts early. Then bring that context into a team workshop. During the session, identify technical leads — those trusted voices who can translate strategy into action. Encourage them to pre-think tradeoffs and dependencies before the group session. ... The perfect roadmap doesn’t exist, and that’s the point. Remember, the goal isn’t to build a flawless plan, but a resilient one. As President Dwight D. Eisenhower said, “Plans are useless, but planning is indispensable.” ... Vision without execution is hallucination. But execution without vision is chaos. The magic of product leadership lies in balancing both: crafting a roadmap that’s both inspiring and achievable.


Scattered network data impedes automation efforts

As IT organizations mature their network automation strategies, it’s becoming clear that network intent data is an essential foundation. They need reliable documentation of network inventory, IP address space, topology and connectivity, policies, and more. This requirement often kicks off a network source of truth (NSoT) project, which involves network teams discovering, validating, and consolidating disparate data in a tool that can model network intent and provide programmatic access to data for network automation tools and other systems. ... Many IT leaders do not see the value of NSoT solutions: the data is already available, they reason, albeit scattered and of dubious quality, so why spend money on a product, or on extra engineers, to consolidate it? “Part of the issue is that we’ve got leadership that are not infrastructure people,” said a network engineer with a global automobile manufacturer. “It’s kind of a heavy lift to get them to buy into it, because they see that applications are running fine over the network. ‘Why do I need to spend money on this?’ And we tell them that the network is running fine, but there will be failures at some point and it’s worth preventing that.” ... NSoT isn’t a magic bullet for solving the problems IT organizations have with poor network documentation and scattered operational data. Network engineering teams will need to discover, validate, reconcile, and import data from multiple repositories. This process can be challenging and time-consuming. Some of this data will be difficult to find.
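The reconciliation step at the heart of an NSoT project, comparing intended state against what the live network actually reports, can be sketched in a few lines. This is an illustrative sketch under assumed data shapes, not any vendor's API; the interface names and attributes are invented for the example.

```python
def find_drift(intent: dict, observed: dict) -> dict:
    """Return interfaces whose observed config differs from recorded intent."""
    drift = {}
    for iface, wanted in intent.items():
        actual = observed.get(iface)
        if actual != wanted:
            drift[iface] = {"intended": wanted, "observed": actual}
    return drift

# Intent as modelled in the source of truth vs. state polled from devices.
intent = {"eth0": {"vlan": 10, "mtu": 9000}, "eth1": {"vlan": 20, "mtu": 1500}}
observed = {"eth0": {"vlan": 10, "mtu": 1500}, "eth1": {"vlan": 20, "mtu": 1500}}
```

Here `find_drift(intent, observed)` flags only `eth0`, whose MTU does not match intent; an automation tool could then remediate the drift or open a ticket. The hard part the article describes, getting `intent` populated and trustworthy in the first place, is exactly what the NSoT project is for.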


What insurers expect from cyber risk in 2026

Cyber insurers are beginning to use LLMs to translate internet-scale data into structured inputs for underwriting and portfolio analysis. These applications target specific pain points such as data gaps and processing delays. Broader change across pricing or risk selection remains gradual. ... AI-supported workflows begin to reduce repetitive tasks across those stages. Automation supports data entry, document review, and routine verification. Human oversight remains central for judgment-based decisions. The research links this shift to measurable operational effects. Fewer manual touches per claim reduce processing time and error rates. Claims teams gain capacity without proportional increases in staffing. ... Age verification and online safety legislation introduce unintended cyber risk. Requirements that reduce online anonymity create high-value identity datasets that attract attackers. The research highlights rising exposure to identity-based coercion, insider compromise, and extortion. Once personal identity data is leaked, attackers gain leverage that can translate into access to corporate systems. This dynamic supports long-term campaigns by organized groups and state-aligned actors. ... Data orchestration becomes a core capability. Insurers and reinsurers integrate signals including security posture, threat activity, and loss experience into shared models. Consistent views across teams and regions support portfolio governance. This shift places emphasis on actionability. Data value depends on timing and relevance within workflows rather than volume alone.


Human + AI Will Define the Future of Work by 2027: Nasscom-Indeed Report

This emerging model of Humans + AI working together is reported as the next phase of transformation, where success depends on how effectively AI will augment human capabilities, empower employees, and align with organizational purpose. The report highlights that the most effective human–AI partnerships are emerging across higher-order activities such as scope definition, system architecture, and data model design. At the same time, more routine and repeatable tasks, including boilerplate code generation and unit test creation, are expected to be increasingly automated by AI over the next two to three years. ... To stay relevant in a Human + AI workplace, the report emphasizes that individuals should build capability, adaptability, and continuous learning. This includes experience with using AI tools (prompting, critical review of output, combining AI speed with human judgment), moving up the value chain (e.g., developers from coding to architecture thinking), building multidisciplinary skills (tech + domain + professional skills), and focusing on outcomes over credentials by creating repositories of work samples showing measurable impact. ... Organizations have already started taking measures to address these challenges. Seven in ten HR leaders are focusing on upskilling, and more than half on modernizing systems. With respect to AI adoption, 79% prioritize internal reskilling as a dominant strategy.


From vulnerability whack-a-mole to strategic risk operations

“Software bills of materials are just an ingredients list,” he notes. “That’s helpful because the idea is that through transparency we will have a shared understanding. The problem is that they don’t deliver a shared understanding because the expectation of anyone in security who reads the SBOM is the first job they’ll do is run those versions against vulnerability databases.” This creates a predictable problem: security teams receive SBOMs, scan them for vulnerabilities, and generate alerts for every CVE match, regardless of whether those vulnerabilities actually affect the product. ... To make SBOMs truly useful, Kreilein introduces VEX (Vulnerability Exploitability Exchange), an open standards framework that addresses the context problem. VEX provides four status messages: affected, not affected, under investigation, and fixed. “What we want to start doing is using a project called VEX that gives four possible status messages,” Kreilein explains. ... Developers aren’t refusing to patch because they don’t care about security. They’re worried that upgrading a component will break the application. “If my application is brittle and can’t take change, I cannot upgrade to the non-vulnerable version,” Kreilein explains. “If I don’t have effective test automation and integration and unit testing, I can’t guarantee that this upgrade won’t break the application.” This reframing shifts the security conversation from compliance and mandates to engineering fundamentals. Better test coverage, better reference architectures, and better secure-by-design practices become security initiatives.
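As a rough illustration of how those four VEX statuses change triage, the sketch below filters raw SBOM-to-CVE matches against supplier VEX statements, keeping only the findings the supplier has not ruled out. The data structures here are invented for the example, not taken from the VEX or CycloneDX specifications.

```python
# The four status messages defined by VEX, per the article.
VEX_STATUSES = {"affected", "not_affected", "under_investigation", "fixed"}

def triage(sbom_matches: list[dict], vex: list[dict]) -> list[dict]:
    """Keep only CVE matches that VEX statements do not rule out."""
    ruled_out = {(v["cve"], v["component"]) for v in vex
                 if v["status"] in ("not_affected", "fixed")}
    return [m for m in sbom_matches
            if (m["cve"], m["component"]) not in ruled_out]

# Hypothetical scanner output: every CVE match from the SBOM, no context.
matches = [
    {"cve": "CVE-2024-0001", "component": "libfoo"},
    {"cve": "CVE-2024-0002", "component": "libbar"},
]
# Supplier VEX statement: the libfoo finding does not affect this product.
vex_statements = [
    {"cve": "CVE-2024-0001", "component": "libfoo", "status": "not_affected"},
]
```

Running `triage(matches, vex_statements)` leaves only the `libbar` finding, which is the shift Kreilein describes: alerts driven by exploitability context rather than raw version matching.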


AI backlash forces a reality check: humans are as important as ever

Companies are now moving beyond the hype and waking up to the consequences of AI slop, underperforming tools, fragmented systems, and wasted budgets, said Brooke Johnson, chief legal officer at Ivanti. “The early rush to adopt AI prioritized speed over strategy, leaving many organizations with little to show for their investments,” Johnson said. Organizations now need to balance AI, workforce empowerment and cybersecurity at the same time they’re still formulating strategies. That’s where people come in. ... AI is becoming less a tech problem and more of an adoption hurdle, Depa said. “What we’re seeing now more and more is less of a technology challenge, more of a change management, people, and process challenge — and that’s going to continue as those technologies continue to evolve,” he said. DXC Technology is taking a similar approach, designing tools where human insight, judgment, and collaboration create value that AI can’t deliver alone, said Dan Gray, vice president of global technical customer operations at the company. ... Companies might have to accept underutilizing some of the AI gains in the near term. AI could help workers complete their tasks in half the time and enjoy a leisurely pace. Alternatively, employees might burn out quickly by being handed more work. “If you try to lay them off, you don’t have a good workforce left. If you let them be, why are you paying them? So that’s a paradox,” Seth said.


Physical AI is the next frontier - and it's already all around you

Physical AI can be generally defined as AI implemented in hardware that can perceive the world around it and then reason to perform or orchestrate actions. Popular examples include autonomous vehicles and robots -- but robots that utilize AI to perform tasks have existed for decades. So what's the difference? ... Saxena adds that while humanoid robots will be useful in instances where humans don't want to perform a task, either because it is too tedious or too risky, they will not replace humans. That's where AI wearables, such as smart glasses, play an important role, as they can augment human capabilities. But beyond that, AI wearables might actually be able to feed back into other physical AI devices, such as robots, by providing a high-quality dataset based on real-life perspectives and examples. "Why are LLMs so great? Because there is a ton of data on the internet, for a lot of the contextual information and whatnot, but physical data does not exist," said Saxena. ... Given the privacy concerns that may come from having your everyday data used to train robots, Saxena highlighted that the data from your wearables should always be kept at the highest level of privacy. With that safeguard in place, the data -- which should already be anonymized by the wearable company -- could be very helpful in training robots. That robot can then create more data, resulting in a healthy ecosystem. "This sharing of context, this sharing of AI between that robot and the wearable AI devices that you have around you is, I think, the benefit that you are going to be able to accrue," added Asghar.


Unlocking the Power of Geospatial Artificial Intelligence (GeoAI)

GeoAI is more than sophisticated map analytics. It is a strategic technology that blends AI with the physical world, allowing tech experts to see, understand, and act on patterns that were previously invisible. From planning sustainable cities to protecting wildlife, it’s helping experts tackle significant challenges with precision and speed. As the world generates more location-based data every day, GeoAI is becoming a must-have tool. It’s not just tech – it’s a way to make the world work better. ... To make it simpler: machine learning spots trends, computer vision interprets images, GIS organizes it all, and knowledge graphs tie it together. The result? GeoAI can take a chaotic pile of data and deliver clear answers, like telling a city where to build a new park or warning about a wildfire risk. It’s a powerhouse that’s making location-based decisions faster and smarter. In all, GeoAI is transforming the speed at which we extract meaning from complex datasets, thereby enabling us to address the Earth’s most pressing challenges. ... Though powerful, GeoAI is not without challenges. Effective implementation requires careful attention to data privacy, technical infrastructure, and organizational change management. ... Leaders who take GeoAI seriously stand to gain more than just incremental improvements. With the right systems in place, they can respond faster, make smarter decisions, and get better results from every field team in the network.


For application security: SCA, SAST, DAST and MAST. What next?

If you think SAST and SCA are enough, you’re already behind. The future of app security is posture, provenance and proof, not alerts. ... Posture is the ‘what’; provenance is the ‘how’. The SLSA framework gives us a shared vocabulary and verifiable controls to prove that artifacts were built by hardened, tamper‑resistant pipelines with signed attestations that downstream consumers can trust. When I insist on SLSA Level 2 for most services and Level 3 for critical paths, I am not chasing compliance theater; I am buying integrity that survives audit and incident. Proof is where SBOMs finally grow up. Binding SBOM generation to the build that emits the deployable bits, signing them and validating at deploy time moves SBOMs from “ingredient lists” to enforceable controls. The CNCF TAG‑Security best practices v2 paper is my practical map: personas, VEX for exploitability, cryptographic verification to ensure tests actually ran, and prescriptive guidance for cloud‑native factories. ... Among the nexts, AI is the most mercurial. NIST’s final 2025 guidance on adversarial ML split threats across PredAI and GenAI and called out prompt injection, in direct and indirect form, as the dominant exploit in agentic systems where trusted instructions commingle with untrusted data. The U.S. AI Safety Institute published work on agent hijacking evaluations, which I treat as required red‑team reading for anyone delegating actions to tools.
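Binding an SBOM to the build that emits the deployable bits and validating at deploy time reduces, at minimum, to a digest check. The sketch below shows only that digest-binding step; it deliberately omits the signing and attestation layers (e.g. SLSA provenance, signed SBOMs) that a real pipeline would add on top.

```python
import hashlib

def record_binding(artifact: bytes) -> dict:
    """Build step: emit SBOM metadata naming the exact artifact digest."""
    return {"artifact_sha256": hashlib.sha256(artifact).hexdigest()}

def verify_binding(artifact: bytes, sbom_meta: dict) -> bool:
    """Deploy step: refuse any artifact whose digest doesn't match its SBOM."""
    return hashlib.sha256(artifact).hexdigest() == sbom_meta["artifact_sha256"]

# Hypothetical build output and the SBOM metadata generated alongside it.
build_output = b"app-v1.2.3 bytes"
sbom_meta = record_binding(build_output)
```

At deploy time, `verify_binding` passes for the original bytes and fails for anything tampered with, which is what turns the SBOM from an ingredient list into an enforceable control.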

Daily Tech Digest - July 08, 2024

How insurtech startups are addressing the challenges of slow processes in the insurance sector

Even though compliance and regulation are critical for the security of both insurers and customers, the regulatory process can be quite long. Compliance requirements demand meticulous attention to detail and can significantly prolong the approval process for new products and services. Another factor is risk aversion, which fosters a culture of caution within the industry, where insurers are hesitant to embrace change and experiment with new approaches to product development and underwriting. ... One of the solutions for these industry challenges lies in collaboration between the insurance sector and the latest technologies. Insurtech solutions offer a myriad of innovative tools and technologies that promise to streamline product development and automate underwriting processes. One such solution gaining traction is artificial intelligence (AI) and machine learning algorithms, which can analyse vast amounts of data in real time to assess risk and expedite underwriting decisions.


Transforming Business Practices Through Augmented Intelligence

While AI raises apprehensions about potential job displacement, viewing it solely as a threat overlooks its capacity to enhance human capabilities, as evidenced by historical technological advancements. Training and education play a key role in this process, as AI has become an integral part of our reality and must be harnessed to its full potential. It is essential to align the use of artificial intelligence with the overall strategy of the organization for smooth integration of applications with data, processes, and collaboration between stakeholders. In a landscape where the internet simplifies transactions, software provides tools, and AI leverages data to make informed decisions, training and education become crucial. ... At its core, technology has always revolved around processing data. When viewed through the lens of enterprise architecture, an AI-powered machine learning tool can adeptly craft roadmaps tailored for businesses. Through advanced AI analytics, automation, and recommendation systems, enterprise architecture facilitates more informed and expedited decision-making processes.


Request for proposal vs. request for partner: what works best for you?

An RFProposal is an efficient choice when the nature of the work is standardized, while an RFPartner is the better choice when the buying organization is seeking a strategic partner for the overall best fit to meet its needs. ... When organizations shift to wanting to find a partner with the best possible solution, it’s important to understand how the nature of the selection criteria changes. With an RFPartner, buyers evaluate suppliers not only based on technical capabilities but also on the best value of the solution. ... “On the surface, an RFPartner sounds like a heavy lift, but we find that the overall time and effort is about the same,” he says. “In an RFProposal, the buyer is spending more time upfront defining the specs and in contentious negotiations. The RFPartner process flips this on its head and creates a more integrated bid solution that generates better solutions, spending more time together with the supplier co-creating, especially if your aim is making the shift to a highly collaborative vested business model to achieve strategic business outcomes.”


If you’re a CISO without D&O insurance, you may need to fight for it

D&O insurance covers the personal liabilities of corporate directors and officers in the event of incidents that lead to financial losses, reputational damage, or legal consequences. Without adequate D&O coverage, CISOs are left vulnerable, highlighting the need for this in an organization’s risk-management strategy. ... Lisa Hall, CISO at privately held Safebase, agrees that CISOs at all companies should be covered under their organizations’ D&O insurance policies, particularly in light of these new regulations. “I do think adding CISOs to D&O insurance will be more and more of a thing, and there is, for sure, more chatter in my CISO groups about how companies are handling this,” she says. “A lot of CISOs are also taking out errors and omissions insurance personally. I have that just for the consulting and advisory work I do.” ... “A lot of CISOs are thinking about this, especially after SolarWinds,” she says. “And if we feel that we’re not 100% protected for any decision we make, and we can be personally liable for a breach or possible incident even if we do the right thing, it’s really pushing CISOs to say, ‘Hey, company, I’ll join if you cover me or give me a different title.’ “


How DORA is fortifying Europe’s financial future with a new take on operational resilience

For DORA, digital operational resilience very simply means “the ability of a financial entity to build, assure, and review its operational integrity and reliability by ensuring, either directly or indirectly through the use of services provided by ICT third-party service providers, the full range of ICT-related capabilities needed to address the security of the network and information systems which a financial entity uses, and which support the continued provision of financial services and their quality, including throughout disruptions”. Developing on this statement in a conversation with FinTech Futures, Simon Treacy, a senior associate at global law firm Linklaters, describes DORA as “a very prescriptive framework for financial entities, primarily to build and improve the way that they manage ICT risk”. “It applies very broadly across the EU regulated financial sector,” he continues, “and really part of its aim is to harmonise standards so that the smallest payments firm is subject to the same rules for operational resilience as the biggest banks and insurers.”


Data Sprawl: Continuing Problem for the Enterprise or an Untapped Opportunity?

Data fabric technologies excel in integrating and managing data across various environments. However, they often focus on conventional data sources like databases, data lakes, or data warehouses. The result is a gap in integrating and extracting value from data residing in numerous SaaS applications, as they may not seamlessly fit into these traditional data repositories. The combined solution of data fabric and iPaaS can address complex business challenges, such as integrating data from SaaS applications with traditional data sources. This capability is particularly valuable in today’s business landscape, where data is increasingly scattered across various cloud and on-premises environments. The merging of data fabric and iPaaS technologies offers a groundbreaking solution to this challenge, opening the door to new opportunities in data management and analysis. The integration of data fabric with iPaaS addresses the complexity and expertise-dependency in iPaaS. Data fabric can enable users to discover, understand, and verify data before integration flows are built. 


AI’s moment of disillusionment

AI, whether generative AI, machine learning, deep learning, or you name it, was never going to be able to sustain the immense expectations we’ve foisted upon it. I suspect part of the reason we’ve let it run so far for so long is that it felt beyond our ability to understand. It was this magical thing, black-box algorithms that ingest prompts and create crazy-realistic images or text that sounds thoughtful and intelligent. And why not? The major large language models (LLMs) have all been trained on gazillions of examples of other people being thoughtful and intelligent, and tools like ChatGPT mimic back what they’ve “learned.” ... We go through this process of inflated expectations and disillusionment with pretty much every shiny new technology. Even something as settled as cloud keeps getting kicked around. My InfoWorld colleague, David Linthicum, recently ripped into cloud computing, arguing that “the anticipated productivity gains and cost savings have not materialized, for the most part.” I think he’s overstating his case, but it’s hard to fault him, given how much we (myself included) sold cloud as the solution for pretty much every IT problem.


How nation-state cyber attacks disrupt public services and undermine citizen trust

While nation-states do have advanced capabilities and visibility that are hard or impossible for cyber criminals to replicate, the general strategy for attackers is to target vulnerable perimeter devices such as VPNs or firewalls as an entry point to the network. Next, they focus on obtaining privileged credentials, leveraging legitimate software to masquerade as normal activity as they scout the environment for valuable data or large repositories to disrupt. It’s important to note that the commonly exploited vulnerabilities in government IT systems are not distinctly different from the vulnerabilities exploited more broadly. Government IT systems are often extremely diverse and thus subject to a variety of exploits. ... Currently, there are numerous policies and regulations, both domestic and international, which are inconsistent and vary in their requirements. These administrative requirements take significant resources that could otherwise be used to strengthen a company’s cybersecurity program.


How Quantum Computing Will Revolutionize Cloud Analytics

As we peer into the future of quantum computing in cloud analytics, the emphasis on collaboration and continuous innovation becomes undeniable. Integrating quantum technologies with cloud systems is not just a technological upgrade but a paradigm shift requiring robust partnerships across academia, industry, and government sectors. For instance, IBM’s quantum network includes over 140 members, including start-ups, research labs, and educational institutions, working together to advance quantum computing. This collaborative model is essential because the challenges in quantum computing are not just about hardware or software alone but about creating an ecosystem that supports an entirely new kind of computing. That ecosystem comprises quantum hardware development, quantum algorithms, software tools, and educational resources. The network has also notched significant achievements, such as developing hardware like the IBM Quantum System One, advancing quantum algorithms for practical applications in chemistry and materials science, and creating the Qiskit software development kit to make quantum programming more accessible.


How continuous learning is reshaping the workforce

Gone are the days when lengthy training programs were sought after and people took breaks from their careers to pick up an upskilling program. Navpreet Singh highlights that upskilling will become an ongoing process integrated into the workday. “The focus will shift from acquiring specific job skills to fostering adaptability and lifelong learning. Critical thinking, problem-solving, and creativity will be paramount as automation takes over routine tasks. Traditional ways of learning may not always reflect the skills needed. Alternative credentials, like badges and micro-credentials, will showcase the specific skills employees possess, making them more competitive. By embracing this future of upskilling, we can ensure our workforce is adaptable, future-proof, and ready to drive innovation in the ever-evolving automotive industry,” explains Singh. Within the next decade or so, we will see greater demand for agile ed-tech tools that help employees learn on the go and prepare them for new roles, says Daniele Merlerati, Chief Regional Officer APAC, Baltics, Benelux at Gi Group Holding.



Quote for the day:

"Perseverance is failing nineteen times and succeeding the twentieth." -- Julie Andrews

Daily Tech Digest - June 25, 2024

Six Strategies For Making Smarter Decisions

Broaden your options - Instead of Options A and B, what about C or even D? A technique I use in working with client organizations is to set up a “challenge statement” that inevitably reveals multiple possibilities to be decided upon. I’ll have small groups of four or five people take 10 minutes to list all the options without discussing or critiquing them during the exercise. Frame challenge statements thusly: “In what ways might we accomplish X?” ... Listen to your gut - Intuition is knowing something without knowing quite how we know it. All of us have it, but in a data-driven world, listening to it becomes harder. Before making an important choice, one executive I interviewed gathers information, weighs all the facts – then takes time to stop and listen to what his gut is telling him. “When a decision doesn’t feel good,” one executive commented, “It feels like a stomachache. And when a decision feels right, it’s like I’ve eaten a great meal. If I don’t feel good in my gut about a decision, I don’t care if the numbers say we’re going to make a billion dollars, I won’t go ahead with it. That’s how important intuition is to me.”


Overcoming Stagnation And Implementing Change To Facilitate Business Growth: The How-To

Overcoming stagnation is about understanding that doing the same thing over and over again will give you the same results over and over again. But changing the former will naturally impact the latter. The three main objectives in any transformation initiative that aims to set up a strong foundation to scale or grow a business are: to become financially lean, with the ability to scale up or down as per market demands; to become internally efficient; and to run day-to-day operations independent of the founder or leader. ... Ideally, it would be wise to aim to maintain 60-70% of the total operating cost as fixed costs, while keeping the remainder as variable costs, allowing for flexibility to adjust the cost structure based on business needs while maintaining profitability throughout the transition and beyond. When an efficient business achieves this level of financial optimization and is managed by a competent team, then the founder or leader will have the time to work on the business, concentrating on long-term strategic growth issues instead of the day-to-day of the enterprise.


Build your resilience toolkit: 3 actionable strategies for HR leaders

Go beyond current job descriptions to identify talent or skill gaps. Focus on future-focused talent acquisition strategies and design upskilling and reskilling programs. Aim to close the skills gap and attract talent with transferable skill sets and a growth mindset. This approach keeps your workforce adaptable and prepared for future challenges. ... Adapting work models and fostering continuous learning cultures are essential. HR leaders can implement flexible work arrangements, such as remote or hybrid models. Encouraging experimentation and risk-taking within teams, and integrating continuous learning opportunities into performance management systems, are key actionable tips. Agile approaches help HR leaders adapt quickly to shifting business requirements. Collaborative work environments are critical in an agile HR strategy. ... Open communication and safe spaces are essential for a supportive culture. HR leaders can encourage employees to voice concerns by creating channels for open dialogue. This approach ensures employees feel heard and valued, contributing to a more inclusive workplace.


The 4 skills you need to engineer a career in automation

Automation engineers are often required to work cohesively with multidisciplinary teams and for that reason, it can be useful to have a solid grasp of workplace soft skills, in addition to compulsory hard skills. Automation engineers are expected to take complex, highly nuanced information and relay it back to not only their peers, but to people who do not have a strong technical background. This requires expert communication skills, as well as an ability to collaborate. ... If you are considering a career as an automation engineer, then a foundational understanding of programming languages and how they are applied is compulsory, as you will frequently need to write and maintain the code that keeps operation systems running. The choice of programming language greatly impacts the success of automation in the workplace, as it will provide and improve versatility, scalability and integration. ... As AI advances, global workplaces will have to evolve in tandem, meaning automation engineers will have to have a standard level of AI and machine learning skills to stay competitive. 


Navigating the Evolving World of Cybersecurity Regulations in Financial Services

Accountability for cybersecurity measures is a key element of the NYDFS regulations. CISOs now must provide a report updating their governing body or board of directors on the company’s cybersecurity posture and plans to fix any security gaps, Burke says. Maintaining accountability entails communicating with the board about cybersecurity risks, explains Kirk J. Nahra, partner and co-chair of the cybersecurity and privacy practice at law firm WilmerHale. “The board needs to understand that its job is to evaluate major issues for a company, and a ransomware attack that shuts down the whole business is a major risk,” Nahra says. “The boards have to become more sophisticated about information security.” ... The NYDFS calls for organizations to have cybersecurity policies that are reviewed and approved annually. Previously, regulations concentrated more on processes and best practices, Nahra says. Now, they are becoming more prescriptive, but multiple regulators are inconsistent, and their standards may conflict at times.


How Banks Can Get Past the AI Hype and Deliver Real Results

If the bank’s backend systems aren’t automated, all the rapidly responding chatbot has done is make a promise that a human will have to fulfil when they finally get to that point in the inbox, Bandyopadhyay says. When they ultimately get back to the customer, that efficient chatbot doesn’t actually look so efficient. Bandyopadhyay explains that this is merely meant as an illustration: a bank has to be ready for the front ends and back ends of customer-facing systems to be in sync, or it risks alienating customers with significant problems. ... The real power of GenAI is its ability to digest and deploy unstructured data. But Bandyopadhyay points out that most banks use legacy systems that can’t capture any of that information. “It’s not data that you put in rows and columns on a spreadsheet,” says Bandyopadhyay. “It’s what language we write and that we speak.” To truly implement GenAI in the long run, he continues, banks will have to lick the longstanding legacy systems problem. Until then, most of their databases aren’t talking GenAI’s language.


Singapore lays the groundwork for smart data center growth

In a move that stunned industry observers, Singapore announced on May 30 that it would release more data center capacity to the tune of 300MW, a substantial figure and a new policy direction for the nation-state. ... The 300MW will come as part of a newly unveiled Green Data Centre (DC) Roadmap drawn up by IMDA, so it does have conditions attached. According to the statutory board, the roadmap was developed to chart a “sustainable pathway” for the continued growth of data centers in Singapore to support the nation’s digital economy. Per the roadmap, Singapore hopes to work with the industry to pioneer solutions for more resource-efficient data centers. One way to view it is as a carrot that it can use to spur data center operators to innovate and accelerate data center efficiency on both hardware and software levels. It is all well and good to talk about allocating hundreds of megawatts of capacity for data centers. But with electrical grids around the world heaving from electrification and sharply rising power demands, is Singapore in a position to deliver this capacity to data center operators today?


Information Blocking of Patient Records Could Cost Providers

Information blocking is defined as a practice that is likely to interfere with the access, exchange or use of electronic health information, except as required by law or specified in one of nine information blocking exceptions. ... Under the security exception, it is not considered information blocking for an actor to interfere with the access, exchange or use of EHI to protect the security of that information, provided certain conditions are met. For example, during a security incident, such as a ransomware attack, a healthcare provider might be unable to provide access or exchange to certain EHI for a time, and that would not constitute information blocking. ... So, as of now, if a healthcare provider does not participate in any of the CMS payment programs that are currently subject to the disincentives, they do not face any potential penalties for information blocking. But that could change moving forward. HHS officials during a briefing with media on Monday said HHS is considering adding other disincentives for healthcare providers that do not participate in such CMS programs. 


How is AI transforming the insurtech sector?

The use of AI also brings risks and ethical considerations for insurers and insurtech firms. “With all AI, you need to understand where the AI models are from and where the data is being trained from and, importantly, whether there is an in-built bias,” says Kevin Gaut, chief technology officer at insurtech INSTANDA. “Proper due diligence on the data is the key, even with your own internal data.” It’s essential, too, that organisations can explain any decisions that are taken, warns Muylle, and that there is at least some human oversight. “A notable issue is the black-box nature of some AI algorithms that produce results without explanation,” he warns. “To address this, it’s essential to involve humans in the decision-making loop, establish clear AI principles and involve an AI review board or third party. Companies can avoid pitfalls by being transparent with their AI use and co-operating when questioned.” AI applications themselves also raise the potential for organisations to get caught out in cyber-attacks. “Perpetrators can use generative AI to produce highly believable yet fraudulent insurance claims,” points out Brugger. 


Evaluating crisis experience in CISO hiring: What to look for and look out for

So long as a candidate’s track record is verifiable and clear in its contribution to intrusion events, direct experience of a crisis may actually be more indicative of future success than more traditional metrics. By contrast, be wary of the “onlookers,” those individuals with qualifications but whose learned experience comes from arm’s length involvement in a crisis. While such persons may contribute positively to their organization, the role of the crisis in their hiring should be de-emphasized relative to more conventional metrics of future performance. ... The emerging consensus of research is that being present for multiple stages of the response lifecycle — being impacted by an attack’s disruptions or helping with preparedness for a future response — is far better experience than simply witnessing an attack. Those who experience the initial effects of a compromise or other attack and then go on to orient, analyze, and engage in mitigation activities are the ones for whom over-generalization and perverse informational reactions appear less likely.



Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden

Daily Tech Digest - December 14, 2023

Moral Machines: The Importance of Ethics in Generative AI

A transparent model can provide better functionality than an opaque model, as it provides users with explanations for its outputs. An opaque model does not explain its reasoning process, which introduces risk and potential liability if unexpected or inaccurate results are provided by a generative AI tool. This lack of visibility also makes opaque models more difficult to test than their transparent counterparts. As such, it’s important to consider generative AI tools with high transparency when working to build ethical systems. Explainability of AI models is another important aspect of creating ethical systems, yet it is challenging to achieve. AI models, specifically deep learning models, use thousands upon thousands of parameters when creating an output. This type of process can be nearly impossible to trace from beginning to end, which limits user visibility. Lack of explainability has already been demonstrated in real-world problems; we’ve seen many examples of AI hallucinations, which occur when a model provides an output that is entirely false or implausible, such as the Bard chatbot error in February 2023.
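The lack of visibility described above is why post-hoc explainability techniques exist. As a minimal, purely illustrative sketch (the model and data below are hypothetical toys, not a deep network), permutation importance measures how much a model's error grows when one input feature is scrambled, revealing which inputs actually drive the output:

```python
import random

# Hypothetical toy model: output depends strongly on feature 0, weakly on feature 1.
def predict(row):
    return 3.0 * row[0] + 0.5 * row[1]

def mse(rows, targets):
    return sum((predict(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, feature, seed=0):
    """Error increase when one feature's values are shuffled across rows."""
    shuffled = [r[feature] for r in rows]
    random.Random(seed).shuffle(shuffled)
    perturbed = [list(r) for r in rows]
    for r, v in zip(perturbed, shuffled):
        r[feature] = v
    return mse(perturbed, targets) - mse(rows, targets)

rows = [[float(i), float(10 - i)] for i in range(10)]
targets = [predict(r) for r in rows]  # baseline error is zero by construction

# Feature 0 matters far more, so shuffling it hurts the model far more.
assert permutation_importance(rows, targets, 0) > permutation_importance(rows, targets, 1)
```

The same idea scales to opaque models because it needs only inputs and outputs, not access to the model's internals.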


12 Software Architecture Pitfalls and How to Avoid Them

Reusing an existing architecture is seldom successful unless the QARs for the new architecture match the ones that an existing architecture was designed to meet. Past performance is no guarantee of future success! Reusing part of an existing application to implement an MVP rapidly may constrain its associated MVA by including legacy technologies in its design. Extending existing components in order to reuse them may complicate their design and make their maintenance more difficult and therefore more expensive. ... Architectural work is problem-solving, with the additional skill of being able to make trade-offs informed by experience in solving particular kinds of problems. Developers who have not had experience solving architectural problems will learn, but they will make a lot of mistakes before they do. ... While new technologies offer interesting capabilities, they always come with trade-offs and unintended side effects. The new technologies don’t fundamentally or magically make meeting QARs unimportant or trivial; in many cases the ability of new technologies to meet QARs is completely unknown.


CIOs weigh the new economics and risks of cloud lock-in

“It is true that hyperscale cloud providers have hit such a critical mass that they create their own gravitational pull,” he says. “Once you adopt their cloud platforms, it can be difficult and expensive to migrate out. [But] CIOs today have more choice in cloud providers than ever. It is no longer a decision between AWS and Azure. Google has been successfully executing a strategy to attract more enterprise customers. Even Oracle has made the transition from focusing on in-house technology to become a full-service cloud provider.” CIOs may consider other approaches, McCarthy adds, such as selecting a single-tenant cloud solution offered by HPE or Dell, which bundle hardware and software in an as-a-service business model that gives CIOs more cloud options. “Another alternative includes colocation companies like Equinix, which has been offering bare-metal IaaS for several years and has now created a partnership with VMware to extend those services higher up the stack,” he says, adding that CIOs should not view a cloud provider “as a location but rather as an operating model that can be deployed in service provider data centers, on-premise, or at the edge.”


Understanding the True Cost of a Data Breach in 2023

Data breaches are common in the modern world, which means even if your organization hasn’t suffered one, the chances of it happening aren’t negligible. Criminal groups stand to profit significantly from these actions, so they are innovative and invest time and money to conduct highly advanced attacks. This means that a data breach doesn’t simply appear one second and then disappear the next. An IBM report noted the average breach cycle lasts for 287 days, with businesses taking 212 days to detect it and an additional 75 to neutralize the threat. Every organization should implement preventative measures to combat threat actors. This means building and exercising safe practices, like storing information securely, adhering to clear policies and training staff to understand data protection. Ultimately, the longer a breach continues, the more expensive it becomes. The Cost of a Data Breach Report 2023 found that companies that contain a breach within 30 days save over $1 million in contrast to those that take longer, so it pays to have a strong recovery process in place.


Fortifying confidential computing in Microsoft Azure

Adding GPU support to confidential VMs is a big change, as it expands the available compute capabilities. Microsoft’s implementation is based on Nvidia H100 GPUs, which are commonly used to train, tune, and run various AI models including computer vision and language processing. The confidential VMs allow you to use private information as a training set, for example training a product evaluation model on prototype components before a public unveiling, or working with medical data, training a diagnostic tool on X-ray or other medical imagery. Instead of embedding a GPU in a VM, and then encrypting the whole VM, Azure keeps the encrypted GPU separate from your confidential computing instance, using encrypted messaging to link the two. Both operate in their own trusted execution environments (TEE), ensuring that your data remains secure. Conceptually this is no different from using an external GPU over Thunderbolt or another PCI bus. Microsoft can allocate GPU resources as needed, with the GPU TEE ensuring that its dedicated memory and configuration are secured.
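Conceptually, the two trusted execution environments agree on a session key during attestation and then exchange only sealed messages over the untrusted bus. The sketch below is a pedagogical stand-in, not Azure's actual protocol: it assumes a pre-attested shared key and uses a toy XOR keystream where a real deployment would use authenticated encryption such as AES-GCM.

```python
import hashlib

def keystream(session_key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from a shared session key.
    Toy construction for illustration only - real TEEs negotiate keys via
    remote attestation and use authenticated ciphers such as AES-GCM."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(session_key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def seal(session_key: bytes, data: bytes) -> bytes:
    """XOR the payload with the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(session_key, len(data))))

# The CPU-side TEE sends training data to the GPU TEE over the untrusted bus.
secret = b"attested-session-key"      # hypothetical, established at attestation
plaintext = b"medical imaging batch"
wire = seal(secret, plaintext)        # what the untrusted host observes
assert wire != plaintext              # opaque in transit
assert seal(secret, wire) == plaintext  # the GPU TEE recovers the payload
```

The point of the sketch is the trust boundary: nothing outside the two TEEs ever sees the plaintext, even though the link between them crosses untrusted hardware.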


From reactive to proactive: Always-ready CFD data center analysis

By synchronizing with these toolsets, digital twin models can pull all relevant, necessary data and update accordingly. The data includes objects on the floor plan, assets in the racks, power chain connections, historical power and environmental readings, and perforated tile and return grate locations. Therefore, the digital twin model is always ready to run the next predictive scenario with current data and minimal supervision from the operational team. As part of the routine output from the software, DataCenter Digital Twin produces Excel-ready reports, capacity dashboards, CFD reports, and go/no-go planning analysis. Teams can then use this information to evaluate future capacity plans, conduct sensitivity studies (such as redundant failure or transient power failure), and run energy optimization studies as needed. Much of this functionality is available through an intuitive and accessible web portal. We know that every organization has a unique set of problems, priorities, and workflows. As such, we’ve split DataCenter Insight Platform into two offerings – DataCenter Asset Twin and DataCenter Digital Twin.


AI-Powered Encryption: A New Era in Cybersecurity

AI-powered encryption represents a groundbreaking advancement in cybersecurity, leveraging the capabilities of artificial intelligence to strengthen data protection. At its core, AI-powered encryption utilizes machine learning algorithms to continuously analyze and adapt to new cyber threats, making it an incredibly dynamic and proactive defense mechanism. By employing AI-driven pattern recognition and predictive analytics, this encryption method can rapidly identify potential vulnerabilities and create tailored encryption protocols to thwart would-be attackers. One key aspect of AI-powered encryption is its ability to autonomously adjust security parameters in real-time based on evolving risk factors. This adaptability ensures that data remains secure even as cyber threats become more sophisticated. Moreover, the integration of AI enables encryption systems to swiftly detect anomalies or suspicious activities within the network, providing an extra layer of defense against unauthorized access or data breaches. 
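As a hedged illustration of "autonomously adjusting security parameters in real-time", the sketch below (the thresholds, metric and function names are hypothetical) shortens a key-rotation interval when a monitored metric drifts far from its recent baseline:

```python
from statistics import mean, pstdev

def anomaly_score(history, current):
    """Z-score of the current reading against recent history."""
    mu, sigma = mean(history), pstdev(history)
    return 0.0 if sigma == 0 else abs(current - mu) / sigma

def rotation_interval(history, current, base_hours=24, floor_hours=1):
    """Shorten the key-rotation interval as the anomaly score rises."""
    score = anomaly_score(history, current)
    if score < 2.0:          # within normal variation
        return base_hours
    return max(floor_hours, int(base_hours / score))

normal_traffic = [100, 102, 98, 101, 99, 100]   # e.g. requests per minute
assert rotation_interval(normal_traffic, 101) == 24   # nominal behaviour
assert rotation_interval(normal_traffic, 500) < 24    # suspected exfiltration
```

Production systems would of course feed richer signals into a trained model rather than a single z-score, but the feedback loop (observe, score, tighten parameters) is the same.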


7 Best Practices for Developers Getting Started with GenAI

Experiment (and encourage your team to experiment) with GenAI tools and code-gen solutions, such as GitHub Copilot, which integrates with every popular IDE and acts as a pair programmer. Copilot offers programmers suggestions, helps troubleshoot code and generates entire functions, making it faster and easier to learn and adapt to GenAI. A word of warning when you first use these off-the-shelf tools: Be wary of using proprietary or sensitive company data, even when just feeding the tool a prompt. GenAI vendors may store and use your data for use in future training runs, a major no-no for your company’s data policy and info-security protocol. ... One of the first steps to deploying GenAI well is to master writing prompts, which is both an art and a science. While prompt engineer is an actual job description, it’s also a good moniker for anyone looking to improve their use of AI. A good prompt engineer knows how to develop, refine and optimize text prompts to get the best results and improve the overall AI system performance. Prompt engineering doesn’t require a particular degree or background, but those doing it need to be skilled at explaining things well.
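As a small illustration of what "developing and refining prompts" can look like in code, the sketch below assembles a prompt from the pieces a prompt engineer typically iterates on. The template and names are illustrative assumptions, one common structure rather than a standard:

```python
def build_prompt(role, task, constraints=(), examples=()):
    """Assemble a structured prompt: role, task, constraints, few-shot examples.
    (One common structure a prompt engineer might iterate on, not the only one.)"""
    parts = [f"You are {role}.", f"Task: {task}"]
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    for example_input, example_output in examples:
        parts.append(f"Example input: {example_input}\nExample output: {example_output}")
    return "\n\n".join(parts)

prompt = build_prompt(
    "a senior Python reviewer",
    "explain what this function does in one sentence",
    constraints=["no jargon", "under 30 words"],
    examples=[("def f(x): return x * 2", "Doubles its input.")],
)
assert "Constraints: no jargon; under 30 words" in prompt
```

Keeping prompts as structured data like this also makes it easy to version them and A/B test refinements, rather than editing free-form strings by hand.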


Could Your Organization Benefit from Hyperautomation?

Building a sophisticated hyperautomation ecosystem requires a significant technology investment, Manders says. “Additionally, the integration of multiple technologies and tools, inherent in hyperautomation, can usher in increased complexity, making ecosystem maintenance a challenging endeavor.” Failing to establish clear goal and governance guidelines can also create serious challenges. Automation without governance could lead individual departments to create their own automation processes, which may conflict with other departments’ processes. The resulting hyperautomation silos could lead to some departments failing to take advantage of solutions fellow departments have already deployed. Additionally, every time an organization transports data to another process or platform, there’s the risk of data leaks. “If we don’t follow best practices and ensure that data is secure, this information could fall into the wrong hands,” Rahn warns. Hyperautomation may also lead adopters to dependency on a particular vendor’s ecosystem of tools and technologies. 


How insurtech is using artificial intelligence

As insurers look to become more customer centric, the coupling of AI with advanced analytics can help provide a more specific, personalised and real-time picture of insurance customers. With insurance customers coming to rely on online platforms for purchasing and managing their policies for such a particular commodity, interactions with the firms themselves are few and far between, which can water down the user experience. However, experience orchestration — the leveraging of customer data and AI by insurance companies to create highly personalised interactions — can be implemented to improve relations long-term. Manan Sagar, global head of insurance at Genesys, explains ... This approach not only improves the customer experience but also enhances employee efficiency by automating tasks or routing calls more effectively. “As the insurance industry navigates the digital age, experience orchestration can serve as a powerful tool to uphold the tradition of trust and personal relationships that have long defined the industry. Through this, firms can differentiate themselves in an increasingly commoditised market and ensure their customers remain loyal and satisfied.”



Quote for the day:

"A true leader always keeps an element of surprise up his sleeve which others cannot grasp but which keeps his public excited and breathless." -- Charles de Gaulle

Daily Tech Digest - March 12, 2021

Hiring developers? Here's how to keep them happy and productive

"Whiteboard coding was another thing that was just totally broken in engineering hiring. Asking people to code on a whiteboard is a different skill set. People don't do it for their day-to-day. It was silly for us to ask people to put code on a whiteboard, but we did it for years!" A better strategy for onboarding developers remotely, Pillar says, is a sort of BYOD policy, whereby hiring managers ask candidates to bring their laptops along to the interview with the understanding that they'll be performing some form of on-the-spot coding while they share their screen with the interviewer. "That's a way more productive way to get an excellent signal about the quality of a developer, because it's actually their environment and you can see them using the tools that they're familiar with," he explains. ... "A meeting is an extremely expensive thing for an engineer. It's way easier, unfortunately, to interrupt an engineer's flow in a remote world with a meeting because their calendar is open, you can just throw it in there and you don't even really think about it." Some software providers now offer analytics tools that measure how workers' time is spent, some of which include the ability to measure interrupted time – also known as 'friction time'.


How Security Architecture Is Shaping Up for 2021

Access is often referred to as zero-trust network access, which seems incorrect to me since it's application access, which is network access, which is the old traditional VPN piece. But with that architecture it makes no difference whether you're on or off the network. It uses an access proxy to provide a security and control context, and it'll provide identity components for users and devices. So it gives you this application [information] and then contextual information as applied per session. That's one architecture — one of the problems, obviously, when you have some of these architectures is that you try and build it and you've got five different vendors: you're trying to build it from, say, an endpoint solution, you need a proxy, you need security, you need identity, you need these other contextual engineering management [techniques]. So customers have trouble when they try to build it across maybe five or six vendors. That's why I think it's a really important architecture, especially when I think people are going to be moving on and off the network backwards and forwards more and more often, and then it doesn't really matter, because it's a zero trust architecture. So that's one really important component.
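The per-session decision the speaker describes can be sketched roughly as follows. Everything here is hypothetical (the function, app list and risk thresholds are illustrative only); the point is that identity for the user and device plus session context are evaluated on every request, regardless of network location:

```python
from dataclasses import dataclass

@dataclass
class SessionContext:
    user_verified: bool    # identity component for the user
    device_trusted: bool   # identity component for the device
    app: str               # which application is being requested
    risk_score: float      # contextual signal for this session, 0 (low) to 1 (high)

# Hypothetical per-application risk tolerances held by the access proxy.
APP_RISK_TOLERANCE = {"payroll": 0.3, "wiki": 0.8}

def authorize(ctx: SessionContext) -> bool:
    """Access-proxy decision, evaluated per session, on or off the network."""
    if not (ctx.user_verified and ctx.device_trusted):
        return False
    tolerance = APP_RISK_TOLERANCE.get(ctx.app)
    return tolerance is not None and ctx.risk_score <= tolerance

assert authorize(SessionContext(True, True, "wiki", 0.5))
assert not authorize(SessionContext(True, False, "wiki", 0.1))    # untrusted device
assert not authorize(SessionContext(True, True, "payroll", 0.6))  # too risky for this app
```

Note that the decision never consults where the request came from, which is what makes being "on or off the network" irrelevant.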


IT security strategy: A CISO's 5 essentials

One of the most common cyberattack vectors remains exploiting known vulnerabilities in OS software and applications. To combat these attacks, stay on top of the maintenance level of your hardware and software. Unsupported components should be upgraded or replaced as soon as possible. Conduct vulnerability scans for the full infrastructure monthly, and correct issues as soon as possible. Ensure your scans include third-party products and applications. ... A famous baseball coach once said, “You can observe a lot by just looking.” Make better use of the logs and reports provided by the systems and applications running your business. Delineate baselines and metrics defining security health. A change in activity patterns or metrics may be an early indicator of trouble brewing. Develop, maintain, and test a practical security incident management plan so you will know what to do if faced with a real incident. Composing a secure foundation isn’t easy in the best of times. While these five tips may not be as exciting as hunting for hackers or implementing a sophisticated security incident event management (SIEM) system, they are the building blocks of a strong foundation and offer the best way to move organizations forward safely.


Power Equipment: A New Cybersecurity Frontier

While IoT has been the catalyst for many positive developments, there are challenges with these expanding interconnections. For power management, the ability to connect backup equipment like an uninterruptible power supply (UPS) can prove helpful in enabling IT teams to monitor and maintain essential infrastructure more efficiently. However, like any other network-connected devices, they become assets that need to be secured from potential cyber breaches. Though UPS doesn't traditionally come to mind when envisioning ways cybercriminals infiltrate a network, the same could also be said for other inconspicuous devices like HVAC units. Yet, that's exactly what hackers pursued when they were able to gain access to Target's system and steal data on over 40 million credit and debit cards. And consider how hackers were able to penetrate the network of a North American casino utilizing an Internet-connected thermometer inside an aquarium. Finding the vulnerability in a fish tank, of all places, allowed hackers to access the casino's database and ultimately steal private customer data.


How do I select a SOAR solution for my business?

A SOAR solution should enable teams to automate the identification and response process across significant volumes of disparate data streams, so that the prioritisation of threats and vulnerabilities becomes almost seamless and far more operationally efficient. If implemented correctly, Security Operations Centres (SOCs) can benefit from using SOAR solutions, helping them to deal with threats faster and more efficiently. Integrating SOAR with other security tools, such as Security Information and Event Management (SIEM), can transform SOC teams’ business and technology outcomes through automation, while also increasing efficiency. Combining forces, organisations can use SOAR to augment the capabilities of SIEM, offering an all-comprehensive solution. SIEMs collect and store data in a useful manner, which SOAR can use to automatically investigate and respond to incidents and reduce the need for manual operations. What’s more, in tackling one of the biggest challenges for SOC teams to date, SOAR solutions can help to ingest information, then sort, prioritise and combine duplicate alerts to reduce the number of false positives.
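The deduplication and prioritisation step might look something like the following sketch. The alert fields and merge policy are illustrative assumptions, not any particular SOAR product's schema:

```python
# Toy alerts as (source, signature, asset, severity) tuples - illustrative
# stand-ins for what a SOAR platform would ingest from SIEM and EDR feeds.
alerts = [
    ("siem", "brute-force", "vpn-gw", 3),
    ("edr",  "brute-force", "vpn-gw", 4),   # same incident seen by a second tool
    ("siem", "port-scan",   "web-01", 2),
    ("siem", "brute-force", "vpn-gw", 3),   # exact repeat
]

def deduplicate(alerts):
    """Combine alerts about the same signature on the same asset, keeping
    the highest severity and counting how many raw alerts were merged."""
    merged = {}
    for _source, sig, asset, sev in alerts:
        entry = merged.setdefault((sig, asset),
                                  {"signature": sig, "asset": asset,
                                   "severity": 0, "count": 0})
        entry["severity"] = max(entry["severity"], sev)
        entry["count"] += 1
    # Highest priority first: severity, then number of corroborating alerts.
    return sorted(merged.values(), key=lambda a: (-a["severity"], -a["count"]))

queue = deduplicate(alerts)
assert len(queue) == 2            # three brute-force duplicates collapsed to one
assert queue[0]["severity"] == 4  # the merged incident keeps the maximum severity
```

Collapsing four raw alerts into two prioritised incidents is exactly the reduction in noise and false positives the excerpt describes.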


Fintech Innovation Done Right: Be A Creator

Fintech can also create entirely new product categories. One mechanism I’ve explored previously are embedded fintech strategies. A financial product can be embedded into other products to change their nature, availability and customer engagement model. Companies like Opendoor give customers the ability to make cash offers for homes to make them more competitive. Boost allows companies to launch insurance products and bundle them into a broader offering. Zola bundles loans and mobile repayments with Pay-As-You-Go financing to unlock demand for home solar systems in Africa. Without the built-in financing, the systems would be unaffordable, making the loan a core piece of the business model, rather than a feature. Similarly, many boot camps engage in income sharing agreements – rather than charging tuition, the program is repaid through a percentage of future earnings for a set period of time. Finally, players like ZhongAn have created fully automated insurance built into products. For instance, in a partnership with a telephone provider, they can automatically detect a broken screen.


Untangl CEO discusses how Insurtech startups are disrupting finance markets

“There’s no doubt that technology is going to disrupt the insurance sector like it has any other industry,” said Stewart. “But I think insurance has been particularly slow when it comes to modernising, and that’s been highlighted by the rapid shift to the cloud. “The pandemic has been another catalyst in a rethink of operations going forward, but a cultural problem has been present around an industry that’s underinvested in technology, while finding it difficult to innovate in such a risk-averse, high margin landscape.” Stewart went on to explain that companies in the space can often spend up to 18 months making decisions to resolve inquiries in response to potential problems, an approach that he described as “a reflection of how not to do it”. However, while the insurance sector has found innovating quickly with short term projects more difficult than other sectors, the past year has seen areas such as personal lines become more agile and intuitive. “It’s not easy because the industry has experts in their complex fields, who are representing stakeholders with billions in capital behind them, and any mistakes can be financially disastrous,” Stewart added.


The Brain of Security

In fairness, security analysts are seeking to make risk-informed decisions, as the human brain does this instinctively. However, they can only do that based on the information they are provided. There are not many security programmes where business context is provided to the analyst to aid in decision making. Recognising this reality, organisations are seeking to quantify their cyber risk to better align security to the business, drive remediation and response activities, support investment decisions and demonstrate return on security investment. Many have already embraced the move to a quantified understanding of risk – only to be let down, as current approaches require too much manual data collection, too much training and professional services support, don’t connect this newfound understanding with the ability to take action, and fail to meet the need to efficiently and cost-effectively mitigate risk. Organisations need to acknowledge that understanding and quantifying risk is critical to building an effective security programme in this day and age. Solely orchestrating and automating security actions with an intelligence-led approach is not enough.
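One widely taught starting point for quantifying cyber risk is annualized loss expectancy (ALE = SLE x ARO). The sketch below uses entirely hypothetical figures, but it shows how a quantified view lets an organisation compare risks and cap what a mitigation is worth spending:

```python
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE: expected loss from one occurrence of the event."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, annual_rate_of_occurrence):
    """ALE = SLE x ARO: expected yearly loss from the risk, used to
    compare risks and to justify (or reject) mitigation spending."""
    return sle * annual_rate_of_occurrence

# Hypothetical figures: a database worth $2M, a breach destroying 25% of its
# value, expected once every four years (ARO = 0.25).
sle = single_loss_expectancy(2_000_000, 0.25)   # $500,000 per event
ale = annualized_loss_expectancy(sle, 0.25)     # $125,000 per year
assert sle == 500_000 and ale == 125_000
```

Frameworks such as FAIR extend this basic arithmetic with probability distributions, but even this simple form gives the board a dollar figure to weigh against mitigation costs.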


CIO Agenda for Right Now: Priorities a Year Into the Pandemic

First, the COVID-19 pandemic brought a period of rapid change and challenges for organizations, and that has accelerated technological change. Future conditions will be significantly different from the past and even from the present, according to White. Second, operating models have had to change. Now that the dust has settled, organizations will be using the rest of 2021 to review and consolidate all of the changes that have happened in organizations, White said. Third, the pandemic has raised new business priorities. Work from home has been one of them. But deeper in that trend, the pandemic has disrupted traditional research conducted by business and has raised different priorities for innovators, according to White. Plus, the work-from-home trend will drive significant organizational changes. Remote leadership poses challenges for presence and influence, according to White. Leaders and managers will need to adapt their styles to encompass non-line-of-sight supervision and performance management. Fourth, the CIO role has changed and will continue to change. Technology and the CIO's response to the pandemic, lockdown, and economic downturn, meant that many organizations were able to survive the initial crisis.


OpenAI’s state-of-the-art machine vision AI is fooled by handwritten notes

Researchers from machine learning lab OpenAI have discovered that their state-of-the-art computer vision system can be deceived by tools no more sophisticated than a pen and a pad. As illustrated in the image above, simply writing down the name of an object and sticking it on another can be enough to trick the software into misidentifying what it sees. “We refer to these attacks as typographic attacks,” write OpenAI’s researchers in a blog post. “By exploiting the model’s ability to read text robustly, we find that even photographs of hand-written text can often fool the model.” They note that such attacks are similar to “adversarial images” that can fool commercial machine vision systems, but far simpler to produce. Adversarial images present a real danger for systems that rely on machine vision. Researchers have shown, for example, that they can trick the software in Tesla’s self-driving cars to change lanes without warning simply by placing certain stickers on the road. Such attacks are a serious threat for a variety of AI applications, from the medical to the military.



Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart

Daily Tech Digest - November 30, 2019

We’ve got to regulate the application of AI — not the tech itself

Another important factor that governments and businesses will need to be aware of will be in devising methods to prevent the rise of AI used with malicious intent, i.e. for hacking or fraudulent sales. Most cyber-experts predict that cyberattacks powered by AI will be one of the biggest challenges of the 2020s, which means that regulations and preventative measures should be implemented as with any other industry: designed specifically for the application. Stringent qualification processes will also need to be addressed for certain industries. For example, Broadway show producers have been driving ticket sales through an automated chatbot, with the show Wicked boasting ROI increases of up to 700 percent. This has also allowed producers to sell tickets for 20 percent higher than the average weekly price.  Regulations will need to address the fact that AI and bots have the potential to take advantage of consumers’ wallets, which means that policymakers will need to work closely with firms that are gradually beginning to rely on chatbots to make sure that consumer rights are not being breached.



How Smart Home Tech Is Shaking Up The Insurance Industry

Ring video doorbell
Through smart home devices, homeowners are able to remain connected to their property 24/7, whether at home, work or on holiday. In turn, this constant connectivity instils a psychological shift in householders, encouraging them to take a more proactive approach to home security and protection. ... For example, while water damage may not top the list of worries for homeowners, it can cost thousands of pounds to repair and is one of the most common types of domestic property damage claims. However, with a leak sensor installed, escaping water can be caught quickly and customers will even be alerted via a notification to their smartphone. This knowledge is critical, as homeowners are able to call out a plumber on the same day – at a fixed fee – and contain the damage. This proactivity benefits both sides. For insurers, responsible and safe homeowners pose less of a risk, resulting in lower premiums. It’s a win-win all round. Moreover, the additional information gained from the steady stream of signals sent to the insurer from in-home sensors and monitors can allow claim handlers to remain better informed in the event of an incident.
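The leak-sensor workflow the excerpt describes (a reading crosses a moisture threshold, and both homeowner and insurer are notified) can be sketched in a few lines of Python. All names and the threshold value here are illustrative assumptions, not any vendor's real API:

```python
# Minimal sketch of a leak-sensor alert loop: a sensor reading crosses a
# moisture threshold and notifications are generated for the homeowner and
# the insurer. Threshold and message formats are assumptions for illustration.

MOISTURE_THRESHOLD = 0.6  # assumed normalised moisture level that signals a leak

def check_reading(sensor_id: str, moisture: float) -> list[str]:
    """Return the notifications a reading should trigger (empty if none)."""
    if moisture < MOISTURE_THRESHOLD:
        return []
    alert = f"Leak suspected at sensor {sensor_id} (moisture={moisture:.2f})"
    # In a real deployment these would be push-notification or webhook calls.
    return [f"homeowner: {alert}", f"insurer: {alert}"]

if __name__ == "__main__":
    for sid, level in [("kitchen", 0.2), ("bathroom", 0.85)]:
        for message in check_reading(sid, level):
            print(message)
```

A production system would of course debounce readings and authenticate the notification channel; the point here is only the detect-then-notify loop that makes same-day plumber call-outs possible.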


Fintech Regulation Needs More Principles, Not More Rules


It is important to recognize that principles-based regulation is not a euphemism for “deregulation” or a “light-touch” approach—far from it. Principles-based regulation is a different way of achieving the same regulatory outcomes as rules-based regulation. But it simply does so in what is, in many cases, a more efficient and flexible manner. That flexibility also prevents subversion of those outcomes through the kind of loopholes that revealed the inherent vulnerability of rules-based regulation in the run-up to the financial crisis. Of course, in practice, it is rare to have either a purely principles-based or a purely rules-based regulation. Rather, they represent two ends of the regulatory spectrum. Every principles-based regulatory regime has some rules, and every rules-based regime has some element of principle. For this reason, we frequently see hybrid regulatory systems of principles and rules.


Singapore wants widespread AI use in smart nation drive


"Domestically, our private and public sectors will use AI decisively to generate economic gains and improve lives. Internationally, Singapore will be recognised as a global hub in innovating, piloting, test-bedding, deploying and scaling AI solutions for impact," said the SNDGO, which is part of the Prime Minister's Office. To kick off its efforts, the government identified five national projects that focused on key industry challenges, including intelligent freight planning in transport and logistics, chronic disease prediction and management in healthcare, and border clearance operations in national safety and security. These form part of nine sectors that have been earmarked for heightened deployment as AI is expected to generate high social and economic value for Singapore. These verticals include manufacturing, finance, cybersecurity, and government. The national AI strategy also outlined five key enablers that the government deemed essential in building a "vibrant and sustainable" ecosystem for AI innovation and adoption. A robust data architecture, for instance, would be necessary for the public and private sectors to manage and exchange information securely, so AI algorithms can have access to quality datasets for training and testing.


How To Thrive At Work: 10 Strategies Based On Brain Science

Brain science can help you thrive at work
In his book, The Shallows, Nicholas Carr demonstrates how our internet usage has rewired our brains. We think superficially, skimming, glancing and scanning rather than reading or processing more deeply. Cal Newport, in his book Deep Work, advocates for focusing, contemplating and concentrating. His contention is that this distraction-free thinking has become increasingly rare and is a skill we must learn (or relearn). In fact, empathy—so critical to our humanity—is impossible without deeply considering others’ situations. And the ability to solve problems and develop ideas cannot happen effectively without depth of thought. Tell stories. While communicating facts tends to engage limited portions of the brain, hearing a story engages multiple parts of the brain. One study in particular, using MRI, found that participants had greater understanding and retention of concepts based on the engagement of multiple parts of the brain. Other researchers, including Dr. Paul Zak, have demonstrated that hearing stories that include conflicts and meaningful characters tends to engage us emotionally. The resulting release of oxytocin leads us to trust the messages and morals the story is trying to convey.


3 Reasons This Stock Is a Top Cybersecurity Pick

Hacker in a hoodie sitting with a laptop.
Check Point's research and development expenses increased 20% year over year while selling and marketing expenses rose nearly 10.5%. Both of these metrics outpaced the company's actual revenue growth. In fact, Check Point has stepped up its investment in both of these line items in the past year or so, and the positive impact is visible on the company's subscription growth. The company is now looking to get into lucrative cybersecurity niches as well. Check Point recently announced the acquisition of Internet of Things (IoT)-focused cybersecurity start-up Cymplify. Check Point will integrate Cymplify's expertise into its Infinity cybersecurity architecture so that clients can protect their IoT devices -- such as smart TVs, medical devices, and IP cameras -- against cyberattacks. This should open up a big growth opportunity for Check Point because according to IHS Markit, cybersecurity is the fastest-growing IoT niche. The firm predicts that the IoT data security market will grow from $3 billion in revenue this year to $7 billion in 2022 as more original equipment manufacturers (OEMs) move to secure their IoT devices.


5G radiation no worse than microwaves or baby monitors: Australian telcos

5g-towers-20180623205641.jpg
"When we've done our tests on our 5G network, they're typically 1,000 to 10,000 times less than what we get from other devices. So when you add all of that up together, it's all very low in terms of total emission. But you're finding that 5G is in fact a lot lower than many other devices we use in our everyday lives." Wood added there is no evidence for cancer or non-thermal effects from radio frequency EME. "There's some evidence for biological effects, but none of these are adverse," Wood told the committee. "So they've really looked at all of the research they need to set a safety standard, and in summary what they said is that, if you follow the guidelines, they're protective of all people, including children." On the issue of governmental revenue raising from its upcoming spectrum sale, Optus said it would be wrong of government to view it as a cash cow, as every dollar spent on spectrum is not used on creating networks. "Critically, in order to achieve the coverage and deployment required, 5G networks will require significant amounts of spectrum," the Singaporean-owned telco wrote.


How can businesses stop AI from going bad?

How can businesses stop AI from going bad? image
Starting from the very beginning of the process, CIOs can help AI be “good” by ensuring that the data being used to create the algorithms is itself ethical and unbiased. Gathering and using data from ethical sources significantly reduces the risk of harbouring toxic datasets which may infect systems with problematic biases further down the line. This is especially crucial for highly regulated industries, which will need to identify biases already present and remedy them accordingly. Using insurance as an example, CIOs should take care not to include data that heavily features one particular demographic, gender, etc., which might skew averages and inform non-representative policies. Collecting a rich sample of ethical, GDPR-compliant, representative data from consenting customers actually benefits the accuracy of the AI it powers, and it also reduces the work needed to “clean” it.
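The kind of dataset-balance check the excerpt calls for can be sketched before any training happens: measure how each demographic group is represented and flag groups that fall outside a tolerance band around their expected share. Field names, expected shares and the tolerance below are illustrative assumptions, not a standard from the article:

```python
# Hedged sketch of a pre-training representation check: flag any value of a
# demographic field whose observed share deviates from its expected share by
# more than a tolerance. All parameters here are illustrative assumptions.

from collections import Counter

def representation_report(records, field, expected_share, tolerance=0.10):
    """Return {value: observed_share} for groups outside the tolerance band."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    flags = {}
    for value, expected in expected_share.items():
        observed = counts.get(value, 0) / total
        if abs(observed - expected) > tolerance:
            flags[value] = round(observed, 2)
    return flags

# A skewed insurance-style sample: 80 records from one group, 20 from another.
policies = [{"gender": "F"}] * 80 + [{"gender": "M"}] * 20
print(representation_report(policies, "gender", {"F": 0.5, "M": 0.5}))
```

Running this on the skewed sample flags both groups (one over-represented at 0.8, one under-represented at 0.2), which is exactly the signal a CIO would want before such data informs a policy model.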


INNOPHYS Develops Muscle Suit for Physical Labor

Japanese woman carrying a load
The suit can lift upwards of 30 kg. While it won’t do the lifting on its own, it can take that weight off its wearer. It offers support in the form of hydraulically-controlled artificial muscles which are housed in an aluminum backpack linked to the waist joints. The pack provides two axes of movement: one for bending at the waist and another for supporting the thighs. Controlling the suit can be done in two ways. The wearer can either blow into a tube or touch a control surface with their chin, thus creating a hands-free control system for the exoskeleton. The muscle suit is wrapped inside a custom, water-repellent bag. This protects the device from the elements and gives it a softer appearance. ... Many other Japanese companies have also taken on the challenge of producing suits to assist in physical labor. Companies like HAL have already established a firm foothold in the exoskeleton industry with their series of robotic suits. Nevertheless, the Muscle Suit is an awe-inspiring invention by this venture company from the Tokyo University of Science.



Yes—at least in some circumstances, both researchers said. Bordes’s group, for example, is creating a benchmark test that can be used to train a machine learning algorithm to automatically detect deepfakes. And Rossi said that, in some cases, A.I. could be used to highlight potential bias in models created by other artificial intelligence algorithms. While technology could produce useful tools for detecting—and even correcting—problems with A.I. software, both scientists emphasized people should not be lulled into complacency about the need for critical human judgment. “Addressing this issue is really a process,” Rossi told me. “When you deliver an A.I. system, you cannot just think about these issues at the time the product is ready to be deployed. Every design choice ... can bring unconscious bias.” You can read more about our discussion and watch a video here. ... “Yes, it is true that A.I. is only as good as the data it has been fed,” she said. But, she argued, this potentially gave people tremendous power.



Quote for the day:


"Whenever you see a successful business, someone once made a courageous decision." -- Peter F. Drucker