
Daily Tech Digest - August 20, 2025


Quote for the day:

"Real difficulties can be overcome; it is only the imaginary ones that are unconquerable." -- Theodore N. Vail


Asian Orgs Shift Cybersecurity Requirements to Suppliers

Cybersecurity audits need to move away from a yearly or quarterly exercise to continuous evaluation, says SecurityScorecard's Cobb. As part of that, organizations should look to work with their suppliers to build a relationship that can help both companies be more resilient, he says. "Maybe you do an on-site visit or maybe you do a specific evidence gathering with that supplier, especially if they're a critical supplier based on their grade," Cobb says. "That security rating is a great first step for assessment, and it also will lead into further discussions with that supplier around what things can you do better." And yes, artificial intelligence (AI) is making inroads into monitoring third-party risk profiles as well. Consultancy EY imagines a future where multiple automated agents track information about suppliers and, when an event — whether cyber, geopolitical, or meteorological — affects one or more supply chains, automatically develop plans to mitigate the risk. Pointing to the repeated supply chain shocks from the pandemic, geopolitics, and climate change, EY argues that an automated system is necessary to keep up. When a chemical spill or a cybersecurity breach affects a supplier in Southeast Asia, for example, the system would track the news, predict the impact on a company's supply, and suggest alternate sources if needed, the EY report stated.


The successes and challenges of AI agents

To really get the benefits, businesses will need to redesign the way work is done. The agent should be placed at the center of the task, with people stepping in only when human judgment is required. There is also the issue of trust. If the agent is only giving suggestions, a person can check the results. But when the agent acts directly, the risks are higher. This is where safety rules, testing systems, and clear records become important. Right now, these systems are still being built. One unexpected problem is that agents often think they are done when they are not. Humans know when a task is finished. Agents sometimes miss that. ... Today, the real barrier goes beyond just technology. It is also how people think about agents. Some overestimate what they can do; others are hesitant to try them. The truth lies in the middle. Agents are strong with goal-based and repeatable tasks. They are not ready to replace deep human thinking yet. ... Still, the direction is clear. In the next two years, agents will become normal in customer support and software development. Writing code, checking it, and merging it will become faster. Agents will handle more of these steps with less need for back-and-forth. As this grows, companies may create new roles to manage agents, needing someone to track how they are used, make sure they follow rules, and measure how much value they bring. This role could be as common as a data officer in the future.


How To Prepare Your Platform For Agentic Commerce

APIs and MCP servers are inherently more agent-friendly but less ubiquitous than websites. They expose services in a structured, scalable way that's perfect for agent consumption. The tradeoff is that you must find a way to allow verified agents to get access to your APIs. This is where some payment processing protocols can help by allowing verified agents to get access credentials that leverage your existing authentication, rate-limiting and abuse-prevention mechanisms to ensure access doesn't lead to spam or scraping. In many cases, the best path is a hybrid approach: Expand your existing website to allow agent-compatible access and checkout while building key capabilities for agent access via APIs or MCP servers. ... Agents work best with standardized checkouts instead of needing to dodge bot blockers and captchas while filling out forms via screen scraping. They need an entirely programmatic checkout process. That means you must move beyond brittle browser autofill and instead accept tokenized payments directly via API. These tokens can carry pre-authorized payment methods such as tokenized credit cards, digital wallets (e.g., Apple Pay and PayPal), stablecoins or on-chain assets, and account-to-account transfers. When combined with identity tokens, these payment tokens allow agents to present a complete, scoped credential that you can inspect and charge instantly. Think Stripe Checkout, but for AI.
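To make the "scoped credential you can inspect and charge" idea concrete, here is a minimal sketch of token inspection before a charge. The `PaymentToken` shape, its field names, and the allowed-method set are invented for illustration; they are not the schema of any real payment protocol.

```python
from dataclasses import dataclass
import time

# Hypothetical set of payment methods a merchant chooses to accept from agents.
ALLOWED_METHODS = {"tokenized_card", "wallet", "stablecoin", "a2a_transfer"}

@dataclass
class PaymentToken:
    """Hypothetical scoped payment credential presented by a verified agent."""
    agent_id: str
    method: str            # e.g. "tokenized_card", "wallet", "stablecoin"
    max_amount_cents: int  # spending cap the user pre-authorized
    expires_at: float      # Unix timestamp after which the token is invalid

def authorize_charge(token: PaymentToken, amount_cents: int, now=None) -> bool:
    """Inspect the token's scope before charging: known method, amount within cap, not expired."""
    now = time.time() if now is None else now
    if now >= token.expires_at:
        return False
    if token.method not in ALLOWED_METHODS:
        return False
    return amount_cents <= token.max_amount_cents

# A token scoped to $50: a $30 charge passes, a $75 charge is refused.
tok = PaymentToken("agent-123", "wallet", 5000, time.time() + 600)
print(authorize_charge(tok, 3000))  # True
print(authorize_charge(tok, 7500))  # False
```

The point of the sketch is that the merchant never sees raw card details or a browser session, only a credential whose scope it can verify programmatically.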


AI agents alone can’t be trusted in verification

One of the biggest risks comes from what's known as compounding errors. Even a very accurate AI system – say, one that is 95% accurate at each step – becomes far less reliable when it's chained through a series of dependent decisions. By the fifth hypothetical step, accuracy would drop to 77% or less. Unlike human teams, these systems don't raise flags or signal uncertainty. That's what makes them so risky: when they fail, they tend to do so silently and exponentially. ... This opacity is particularly dangerous in the fight against fraud, which is only getting more advanced. In 2025, fraudsters aren't using fake passports and bad Photoshop. They're using AI-generated identities, videos, and documents that are nearly impossible to distinguish from the real thing. Tools like Google's Veo 3 or open-source image generators allow anyone to produce high-quality synthetic content at scale. ... Responsible and effective use of AI means using multiple models to cross-check results to avoid the domino effect of one error feeding into the next. It means assigning human reviewers to the most sensitive or high-risk cases – especially when fraud tactics evolve faster than models can be retrained. And it means having clear escalation procedures and full audit trails that can stand up to regulatory scrutiny. This hybrid model offers the best of both worlds: the speed and scale of AI, combined with the judgment and flexibility of human experts. As fraud becomes more sophisticated, this balance will be essential.
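The compounding-error figure is easy to verify: when each step in a chain is independent, end-to-end accuracy is the product of the per-step accuracies.

```python
# Per-step accuracy of 95%, chained over five dependent decisions:
# end-to-end accuracy is the product of the per-step accuracies.
per_step = 0.95
steps = 5
end_to_end = per_step ** steps
print(f"{end_to_end:.1%}")  # 77.4%, matching the "77% or less" figure above
```

"Or less" because 95% is the optimistic case: any step that is harder than average, or any correlation between failures, pushes the chained accuracy lower still.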


AI in the classroom is important for real-world skills, college professors say

The agents can flag unsupported claims in students’ writing, explain why evidence is needed, and recommend the use of credible sources, Luke Behnke, vice president of product management at Grammarly, said in an interview. “Colleges recognize it’s their responsibility to prepare students for the workforce, and that now includes AI literacy,” Behnke said. Universities are also implementing AI in their own learning management systems and providing students and staff access to Google’s Gemini, Microsoft’s Copilot and OpenAI’s ChatGPT. ... Cuo asks students not to simply accept whatever results advanced genAI models spit out, as they may be riddled with factual errors and hallucinations. “Students need to select and read more by themselves to create something that people don’t recognize as an AI product,” Cuo said. Some professors are trying to mitigate AI use by altering coursework and assignments, while others prefer not to use it at all, said Paul Shovlin, an assistant professor of AI and digital rhetoric at Ohio University. But students have different requirements and use AI tools for personalized learning, collaboration, and writing, as well as for coursework workflow, Shovlin said. He stressed, however, that ethical considerations, rhetorical awareness, and transparency remain important in demonstrating appropriate use.


Automation Alert Sounds as Certificates Set to Expire Faster

Decreasing the validity period for a certificate offers multiple benefits. As previous certificate revocations have demonstrated, actually revoking every bad certificate in a timely manner, across the broad ecosystem, is a challenge. Having certificates simply expire more frequently helps address that. The CA/Browser Forum also expects an ancillary benefit of "increased consistency of quality, stability and availability of certificate lifecycle management components which enable automated issuance, replacement and rotation of certificates." While such automation won't fix every ill, the forum said that "it certainly helps." ... When it comes to getting the so-called cryptographic agility needed to manage both of those requirements, many organizations say they're not yet there. "While awareness is high, execution is lagging," says a new study from market researcher Omdia. "Many organizations know they need to act but lack clear roadmaps or the internal alignment to do so." ... For managing the much shorter certificate renewal timeframe, only 19% of surveyed organizations say they're "very prepared," with 40% saying they're somewhat prepared and another 40% saying they're not very prepared and, so far, continuing to rely on manual processes. "Historically, organizations have been able to get by with poor certificate hygiene because cryptography was largely static," said Tim Callan.
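A little date arithmetic shows why shorter lifetimes force automation. The sketch below assumes a renew-at-two-thirds-of-lifetime policy (a common rule of thumb in automated, ACME-style tooling, not something the forum mandates) and compares today's 398-day maximum with a hypothetical 47-day validity period; the 47-day figure is an assumption here, as the article doesn't state the target lifetime.

```python
from datetime import date, timedelta

def renewal_dates(issued: date, validity_days: int, renew_fraction: float = 2 / 3):
    """Return (renew_at, expires): schedule automated renewal at a fraction of
    the validity window, leaving headroom to retry before the certificate dies."""
    renew_at = issued + timedelta(days=int(validity_days * renew_fraction))
    expires = issued + timedelta(days=validity_days)
    return renew_at, expires

# A 398-day certificate leaves months of slack for manual handling;
# a 47-day one leaves about two weeks, which is where manual tracking breaks down.
for days in (398, 47):
    renew_at, expires = renewal_dates(date(2025, 8, 20), days)
    print(f"{days}-day cert: renew by {renew_at}, expires {expires}")
```

The same scheduling logic is also what makes short lifetimes an agility win: the renewal hook that fires every few weeks is the natural place to rotate CAs or swap in new (eventually quantum-safe) algorithms.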


AI Data Centers Are Coming for Your Land, Water and Power

"Think of them as AI factories." But as data centers grow in size and number, often drastically changing the landscape around them, questions are looming: What are the impacts on the neighborhoods and towns where they're being built? Do they help the local economy or put a dangerous strain on the electric grid and the environment? ... As fast as the AI companies are moving, they want to be able to move even faster. Smith, in that Commerce Committee hearing, lamented that the US government needed to "streamline the federal permitting process to accelerate growth." ... Even as big tech companies invest heavily in AI, they also continue to promote their sustainability goals. Amazon, for example, aims to reach net-zero carbon emissions by 2040. Google has the same goal but states it plans to reach it 10 years earlier, by 2030. With AI's rapid advancement, experts no longer know if those climate goals are attainable, and carbon emissions are still rising. "Wanting to grow your AI at that speed and at the same time meet your climate goals are not compatible," Good says. For its Louisiana data center, Meta has "pledged to match its electricity use with 100% clean and renewable energy" and plans to "restore more water than it consumes," the Louisiana Economic Development statement reads.


Slow and Steady Security: Lessons from the Tortoise and the Hare

In security, it seems that we are constantly confronted by the next shiny object, item du jour, and/or overhyped topic. Along with this seems to come an endless supply of “experts” ready to instill fear in us around the “revolutionized threat landscape” and the “new reality” we apparently now find ourselves in and must come to terms with. Indeed, there is certainly no shortage of distractions in our field. Some of us are likely conscious of the near-constant tendency toward distraction in our field. So how can we avoid falling into the trap of succumbing to the temptation and running after every distraction that comes along? Or, to pose it another way, how can we appropriately invest our time and resources in areas where we are likely to see value and return on that investment? ... All successful security teams are governed by a solid security strategy. While the strategy can be adjusted from time to time as risks and threats evolve, it shouldn’t drift wildly and certainly not in an instant. If the newest thing demands radically altering the security strategy, it’s an indicator that it may be overblown. The good news is that a well-formed security strategy can be adapted to deal with just about anything new that arises in a steady and systematic way, provided that new thing is real.


IBM and Google say scalable quantum computers could arrive this decade

The most notable advances come from qubits built with superconducting circuits, as used in IBM and Google machines. These systems must operate near absolute zero and are notoriously hard to control. Other approaches use trapped ions, neutral atoms, or photons as qubits. While these approaches offer greater inherent stability, scaling up and integrating large numbers of qubits remains a formidable practical challenge. "The costs and technical challenges of trying to scale will probably show which are more practical," said Sebastian Weidt, chief executive at Universal Quantum, a startup developing trapped ions. Weidt emphasized that government support in the coming years could play a decisive role in determining which quantum technologies prove viable, ultimately limiting the field to a handful of companies capable of bringing a system to full scale. Widespread interest in quantum computing is attracting attention from both investors and government agencies. ... These next-generation technologies are still in their early stages, though proponents argue they could eventually surpass today's quantum machines. For now, industry leaders continue refining and scaling legacy architectures developed over years of lab research.


The 6 challenges your business will face in implementing MLSecOps

ML models are often “black boxes”, even to their creators, so there’s little visibility into how they arrive at answers. For security pros, this means limited ability to audit or verify behavior – traditionally a key aspect of cybersecurity. There are ways to circumvent this opacity of AI and ML systems: with Trusted Execution Environments (TEEs). These are secure enclaves in which organizations can test models repeatedly in a controlled ecosystem, creating attestation data. ... Models are not static and are shaped by the data they ingest. Thus, data poisoning is a constant threat for ML models that need to be retrained. Organizations must embed automated checks into the training process to enforce a continuously secure pipeline of data. Using information from the TEE and guidelines on how models should behave, AI and ML models can be assessed for integrity and accuracy each time they are given new information. ... Risk assessment frameworks that work for traditional software will not be applicable to the changeable nature of AI and ML programs. Traditional assessments fail to account for tradeoffs specific to ML, e.g., accuracy vs fairness, security vs explainability, or transparency vs efficiency. To navigate this difficulty, businesses must evaluate models on a case-by-case basis, looking to their mission, use case and context to weigh their risks.
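One minimal way to make "automated checks into the training process" concrete is to hash each training batch deterministically and compare it against a digest recorded at attestation time (for instance, inside a TEE). The record format and the idea of a stored baseline digest are assumptions for illustration, not a specific MLSecOps product's mechanism.

```python
import hashlib
import json

def attest_batch(records: list, expected_digest: str = None) -> str:
    """Hash a training batch deterministically; if a baseline digest is supplied,
    fail loudly when the batch has been altered since attestation."""
    canonical = json.dumps(records, sort_keys=True).encode()
    digest = hashlib.sha256(canonical).hexdigest()
    if expected_digest is not None and digest != expected_digest:
        raise ValueError("training batch does not match attested digest (possible poisoning)")
    return digest

# At attestation time, record the digest of the approved batch.
batch = [{"text": "invoice overdue", "label": "phishing"}]
baseline = attest_batch(batch)

# At retraining time, verify the batch before feeding it to the pipeline.
attest_batch(batch, baseline)  # passes: data unchanged
```

A real pipeline would also validate the data's statistical properties, since poisoning can arrive as legitimately new records rather than tampered old ones, but integrity checking is the cheap first gate.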

Daily Tech Digest - July 23, 2025


Quote for the day:

“Our chief want is someone who will inspire us to be what we know we could be.” -- Ralph Waldo Emerson


AI in customer communication: the opportunities and risks SMBs can’t ignore

To build consumer trust, businesses must demonstrate that AI genuinely improves the customer experience, especially by enhancing the quality, relevance and reliability of communication. With concerns around data misuse and inaccuracy, businesses need to clearly explain how AI supports secure, accurate and personalized interactions, not just internally but in ways customers can understand and see. AI should be positioned as an enabler of human service, taking care of routine tasks so employees can focus on complex, sensitive or high-value customer needs. A key part of gaining long-term trust is transparency around data. Businesses must clearly communicate how customer information is handled securely and show that AI is being used responsibly and with care. This could include clearly labelling AI-generated communications such as emails or text messages, or proactively informing customers about what data is being used and for what purpose. ... As conversations move beyond why AI should be used to how it must be used responsibly and effectively, companies have entered a make-or-break “audition phase” for AI. In customer communications, businesses can no longer afford to just talk about AI’s benefits; they must prove them by demonstrating how AI enhances quality, security, and personalization.


The Expiring Trust Model: CISOs Must Rethink PKI in the Era of Short-Lived Certificates and Machine Identity

While the risk associated with certificates applies to all companies, it is a greater challenge for businesses operating in regulated sectors such as healthcare, where certificates must often be tied to national digital identity systems. In several countries, healthcare providers and services are now required to issue certificates bound to a National Health Identifier (NHI). These certificates are used for authentication, e-signature and encryption in health data exchanges and must adhere to complex issuance workflows, usage constraints and revocation processes mandated by government frameworks. Managing these certificates alongside public TLS certificates introduces operational complexity that few legacy PKI solutions were designed to handle in today’s dynamic and cloud-native environments. ... The urgency of this mandate is heightened by the impending cryptographic shift driven by the rise of quantum computing. Transitioning to post-quantum cryptography (PQC) will require organizations to implement new algorithms quickly and securely. Frequent certificate renewal cycles, which once seemed a burden, could now become a strategic advantage. When managed through automated and agile certificate lifecycle management, these renewals provide the flexibility to rapidly replace compromised keys, rotate certificate authorities or deploy quantum-safe algorithms as they become standardized.


The CISO code of conduct: Ditch the ego, lead for real

The problem doesn’t stop at vendor interactions. It shows up inside their teams, too. Many CISOs don’t build leadership pipelines; they build echo chambers. They hire people who won’t challenge them. They micromanage strategy. They hoard influence. And they act surprised when innovation dries up or when great people leave. As Jadee Hanson, CISO at Vanta, put it, “Ego builds walls. True leadership builds trust. The best CISOs know the difference.” That distinction matters, especially when your team’s success depends on your ability to listen, adapt, and share the stage. ... Security isn’t just a technical function anymore. It’s a leadership discipline. And that means we need more than frameworks and certifications; we need a shared understanding of how CISOs should show up. Internally, externally, in boardrooms, and in the broader community. That’s why I’m publishing this. Not because I have all the answers, but because the profession needs a new baseline. A new set of expectations. A standard we can hold ourselves, and each other, to. Not about compliance. About conduct. About how we lead. What follows is the CISO Code of Conduct. It’s not a checklist, but a mindset. If you recognize yourself in it, good. If you don’t, maybe it’s time to ask why. Either way, this is the bar. Let’s hold it. ... A lot of people in this space are trying to do the right thing. But there are also a lot of people hiding behind a title.


Phishing simulations: What works and what doesn’t

Researchers conducted a study on the real-world effectiveness of common phishing training methods. They found that the absolute difference in failure rates between trained and untrained users was small across various types of training content. However, we should take this with caution, as the study was conducted within a single healthcare organization and focused only on click rates as the measure of success or failure. It doesn’t capture the full picture. Matt Linton, Google’s security manager, said phishing tests are outdated and often cause more frustration among employees than actually improving their security habits. ... For any training program to work, you first need to understand your organization’s risk. Which employees are most at risk? What do they already know about phishing? Next, work closely with your IT or security teams to create phishing tests that match current threats. Tell employees what to expect. Explain why these tests matter and how they help stop problems. Don’t play the blame game. If someone fails a test, treat it as a chance to learn, not to punish. When you do this, employees are less likely to hide mistakes or avoid reporting phishing emails. When picking a vendor, focus on content and realistic simulations. The system should be easy to use and provide helpful reports.


Reclaiming Control: How Enterprises Can Fix Broken Security Operations

Asset management is critical to the success of the security operations function. In order to properly defend assets, I first and foremost need to know about them and be able to manage them. This includes applying policies and controls, and being able to identify assets and their locations when necessary, of course. With the move to hybrid and multi-cloud, asset management is much more difficult than it used to be. ... Visibility enables another key component of security operations – telemetry collection. Without the proper logging, eventing, and alerting, I can’t detect, investigate, analyze, respond to, and mitigate security incidents. Security operations simply cannot operate without telemetry, and the hybrid and multi-cloud world has made telemetry collection much more difficult than it used to be. ... If a security incident is serious enough, there will need to be a formal incident response. This will involve significant planning, coordination with a variety of stakeholders, regular communications, structured reporting, ongoing analysis, and a post-incident evaluation once the response is wrapped up. All of these steps are complicated by hybrid and multi-cloud environments, if not made impossible altogether. The security operations team will not be able to properly engage in incident response if they are lacking the above capabilities, and having a complex environment is not an excuse.
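One practical answer to the telemetry problem in hybrid environments is to normalize everything into a single structured event format so detection and alerting can parse it uniformly, wherever it originated. The sketch below shows the idea with a minimal JSON-line emitter; the field names are illustrative, not any SIEM's standard schema.

```python
import json
import time
import uuid

def security_event(source: str, action: str, severity: str, **fields) -> str:
    """Emit one structured telemetry event as a JSON line. A shared schema
    across on-prem and cloud sources is what makes detection rules portable."""
    event = {
        "id": str(uuid.uuid4()),   # unique event ID for correlation
        "ts": time.time(),         # Unix timestamp
        "source": source,          # emitting system or service
        "action": action,          # what happened
        "severity": severity,      # triage hint for alerting
        **fields,                  # source-specific context
    }
    return json.dumps(event, sort_keys=True)

line = security_event("auth-service", "login_failed", "medium",
                      user="alice", src_ip="203.0.113.7")
print(line)
```

Whether the sink is a SIEM, a data lake, or plain files, keeping the envelope identical across environments is what keeps hybrid and multi-cloud telemetry manageable.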


Legacy No More: How Generative AI is Powering the Next Wave of Application Modernization in India

Choosing the right approach to modernizing your legacy systems is no small task. Generative AI helps overcome the challenges faced in legacy systems and accelerates modernization. For example, it can be used to understand how legacy systems function through detailed business requirements. The resulting documents can be used to build new systems on the cloud in the second phase. This can make the process cheaper, too, and thus easier to get business cases approved. Additionally, generative AI can help create training documents for the current system if the organization wants to continue using its mainframes. In one example, generative AI might turn business models into microservices, API contracts, and database schemas ready for cloud-native inclusion. ... You need to have a holistic assessment of your existing system to implement generative AI effectively. Leaders must assess obsolete modules, interdependencies, data schemas, and throughput constraints to pinpoint high-impact targets and establish concrete modernization goals. Revamping legacy applications with generative AI starts with a clear understanding of the existing system. Organizations must conduct a thorough evaluation, mapping performance bottlenecks, obsolete modules, entanglements, and intricacies of the data flow, to create a modernization roadmap.


A Changing of the Guard in DevOps

Asimov, a newcomer in the space, is taking a novel approach — but addressing a challenge that’s as old as DevOps itself. According to the article, the team behind Asimov has zeroed in on a major time sink for developers: The cognitive load of understanding deployment environments and platform intricacies. ... What makes Asimov stand out is not just its AI capability but its user-centric focus. This isn’t another auto-coder. This is about easing the mental burden, helping engineers think less about YAML files and more about solving business problems. It’s a fresh coat of paint on a house we’ve been renovating for over a decade. ... Whether it’s a new player like Asimov or stalwarts like GitLab and Harness, the pattern is clear: AI is being applied to the same fundamental problems that have shaped DevOps from the beginning. The goals haven’t changed — faster cycles, fewer errors, happier teams — but the tools are evolving. Sure, there’s some real innovation here. Asimov’s knowledge-centric approach feels genuinely new. GitLab’s AI agents offer a logical evolution of their existing ecosystem. Harness’s plain-language chat interface lowers the barrier to entry. These aren’t just gimmicks. But the bigger story is the convergence. AI is no longer an outlier or an optional add-on — it’s becoming foundational. And as these solutions mature, we’re likely to see less hype and more impact.


Data Protection vs. Cyber Resilience: Mastering Both in a Complex IT Landscape

Traditional disaster recovery (DR) approaches designed for catastrophic events and natural disasters are still necessary today, but companies must implement a more security-event-oriented approach on top of that. Legacy approaches to disaster recovery are insufficient in an environment that is rife with cyberthreats as these approaches focus on infrastructure, neglecting application-level dependencies and validation processes. Further, threat actors have moved beyond interrupting services and now target data to poison, encrypt or exfiltrate it. ... Cyber resilience is now essential. With ransomware that can encrypt systems in minutes, the ability to recover quickly and effectively is a business imperative. Therefore, companies must develop an adaptive, layered strategy that evolves with emerging threats and aligns with their unique environment, infrastructure and risk tolerance. To effectively prepare for the next threat, technology leaders must balance technical sophistication with operational discipline, as the best defence is not solely a hardened perimeter; it is also having a recovery plan that works. Today, companies cannot afford to choose between data protection and cyber resilience; they must master both.


Anthropic researchers discover the weird AI problem: Why thinking longer makes models dumber

The findings challenge the prevailing industry wisdom that more computational resources devoted to reasoning will consistently improve AI performance. Major AI companies have invested heavily in “test-time compute” — allowing models more processing time to work through complex problems — as a key strategy for enhancing capabilities. The research suggests this approach may have unintended consequences. “While test-time compute scaling remains promising for improving model capabilities, it may inadvertently reinforce problematic reasoning patterns,” the authors conclude. For enterprise decision-makers, the implications are significant. Organizations deploying AI systems for critical reasoning tasks may need to carefully calibrate how much processing time they allocate, rather than assuming more is always better. ... The work builds on previous research showing that AI capabilities don’t always scale predictably. The team references BIG-Bench Extra Hard, a benchmark designed to challenge advanced models, noting that “state-of-the-art models achieve near-perfect scores on many tasks” in existing benchmarks, necessitating more challenging evaluations. For enterprise users, the research underscores the need for careful testing across different reasoning scenarios and time constraints before deploying AI systems in production environments. 


How to Advance from SOC Manager to CISO?

Strategic thinking demands a firm grip on the organization's core operations, particularly how it generates revenue and its key value streams. This perspective allows security professionals to align their efforts with business objectives, rather than operating in isolation. ... This is related to strategic thinking but emphasizes knowledge of risk management and finance. Security leaders must factor in financial impacts to justify security investments and manage risks effectively. Balancing security measures with user experience and system availability is another critical aspect. If security policies are too strict, productivity can suffer; if they're too permissive, the company can be exposed to threats. ... Effective communication is vital for translating technical details into language senior stakeholders can grasp and act upon. This means avoiding jargon and abbreviations and conveying information in plain terms that resonate with multiple stakeholders, including executives who may not have a deep technical background. Communicating the impact of security initiatives in clear, concise language ensures decisions are well-informed and support company goals. ... You will have to ensure technical services meet business requirements, particularly in managing service delivery, implementing change, and resolving issues. All of this is essential for a secure and efficient IT infrastructure.

Daily Tech Digest - August 01, 2024

These are the skills you need to get hired in tech

While soft skills are important, communicating them to a prospective employer can present a conundrum. Tina Wang, division vice president of human resources at ADP, said there are a few ways for job seekers to bring attention to their behavioral skills. It goes beyond just listing “strong work ethic” or “problem solving” on a resume, “though it’s good to add it there too,” she said. Job seekers can incorporate behavioral skills into a track record of job experiences. ... An interview with a prospective employer is also a good time to introduce behavioral skills, but time is limited and job-seekers won’t likely be able to share all their demonstrated skills and experience. “Preparation will go a long way, so think through your talking points and what is important to share,” Wang said. “Think about a few applicable, real work experiences where you demonstrated these skills and sketch out how and when to bring them during the interview process.” References can also be an excellent way to highlight behavioral skills. Intangibles such as a strong work ethic or attention to detail might be something former managers, team members or peers identify.


Ideal authentication solution boils down to using best tools to stop attacks

Given the shifting nature of work, with more employees working remotely, the gaps in protection are manifold. Clunky authentication experiences mean users are often asked to sign in multiple times a day for different applications and accounts. “Users get extremely frustrated when this occurs, and they end up having resistance to adopting these authentication methods,” Anderson says. To improve the situation, organizations need to manage authentication scenarios in onboarding, session tokens that remember logins – and the reality that username and password authentication is still used extensively throughout the security landscape, leaving vulnerabilities to fraud. “Passkeys are good for users because they simplify and streamline the actual authentication ceremony itself, where the user is actively involved,” Miller says. “It doesn’t necessarily decrease the number of times they have to authenticate but it does make it simpler and less taxing.” “They also have knock-on benefits of reducing the amount of information that leaks in the case of a database leak that can be used by an attacker. It shrinks the blast radius of account compromise.”


Should Today’s Developers Be More or Less Specialized?

“The need for specialists is not going to change. If anything, I expect it to increase,” says Hillion. “We still have a number of clients who rely on full-stack developers. I would say the general trend is towards businesses needing more specialized developers who have the right combination of technical skillsets and sector knowledge to deliver what is needed into the complex tech stack. There is significant demand for developers who specialize in particular industry sectors.” ... “Without basic knowledge, pursuing any specific development area is challenging,” says Ivanov. “That’s why it’s best to start by mastering the basic technologies someone is most proficient in, which helps them learn new things faster,” says Ivanov in an email interview. “However, core technologies should not be the end goal. It is also essential to stay up to date with technology trends and always continue using new technology.” Tasks that go beyond standard or general requirements need the involvement of specialists who have knowledge and experience in specific areas. For example, a project that requires complex algorithms or specific technologies will require a specialist with a deep understanding of them.


Between sustainability and risk: why CIOs are considering small language models

“In LLMs, the bulk of the data work is done statistically and then IT trains the model on specific topics to correct errors, giving it targeted quality data,” he says. “SLMs cost much less and require less data, but, precisely for this reason, the statistical calculation is less effective and, therefore, very high-quality data is needed, with substantial work by data scientists. Otherwise, with generic data, the model risks producing many errors.” Furthermore, SLMs are so promising and interesting for companies that even big tech offers and advertises them, like Google’s Gemma and Microsoft’s Phi-3. For this reason, according to Esposito, governance remains fundamental, within a model that should remain a closed system. “An SLM is easier to manage and becomes an important asset for the company in order to extract added value from AI,” he says. “Otherwise, with large models and open systems, you have to agree to share strategic company information with Google, Microsoft, and OpenAI. This is why I prefer to work with a system integrator that can develop customizations and provide a closed system, for internal use.”


Why geographical diversity is critical to build effective and safe AI tools

Geographical diversity is critical as organizations look to develop AI tools that can be adopted worldwide, according to Andrea Phua, senior director of the national AI group and director of the digital economy office at Singapore's Ministry of Digital Development and Information (MDDI). ... "The use of Gen AI has brought a new dimension to cyber threats. As AI becomes more accessible and sophisticated, threat actors will also become better at exploiting it," said CSA's chief executive and Commissioner of Cybersecurity David Koh. "As it is, AI already poses a formidable challenge for governments around the world [and] cybersecurity professionals would know that we are merely scratching the surface of gen AI's potential, both for legitimate applications and malicious uses," Koh said. He pointed to reports of AI-generated content, including deepfakes in video clips and memes, that have been used to sow discord and influence the outcome of national elections. At the same time, there are new opportunities for AI to be tapped to enhance cyber resilience and defense, he said. 


Cloud Migration Regrets: Should You Repatriate?

With increasing pressure to cut costs, many CTOs and CIOs are considering repatriating cloud workloads back on premises. As hard as it may seem, it’s important to think beyond just the cost. You must understand workload requirements to make sound decisions for each application. ... A lot of organizations have forgotten how much IT operations have changed since moving to the cloud. Cloud transformation meant revamping ITOps based on the chosen mix of Infrastructure-, Platform- or Software-as-a-Service (IaaS, PaaS or SaaS) services. Bringing applications back on premises strips away those service layers, and Ops teams may no longer be able or willing to accept the administrative and maintenance burden again. One final consideration before moving workloads off the cloud is security. I think security is one of the many advantages of cloud infrastructure. When businesses first started moving to the cloud, security was one of the biggest concerns. It turns out that cloud providers are better at security than you are. They can’t fix security holes in your software or other operator error scenarios, but a cloud infrastructure provides greater isolation if a breach does occur. 


Chess, AI & future of leadership

As computing power increases and its access cost reduces, AI will become the central force that drives all activities, including imagination! So, imagine the chessboard being AI-enabled. The board now has its intelligence with the ability to understand the context of the game to prompt the next set of moves. The difference between the board-level AI and the AI used by the player as her assistant is that the assistant knows the player’s psyche of defending or attacking, strengths and weaknesses of the player and her opponent, and factors these while offering suggestions. The two AIs may or may not be aligned in their suggestions since both may be accessing different references. Let’s activate the third dimension in chess – the pieces are also intelligent! They know their roles and those of the others. They too can think, strategise, and suggest. For instance, in a choice to move between the rook and the knight, the rook suggests the knight moves. The knight feels the Queen should move! This is the egalitarian version of chess! Does it feel real and practical? In the context of AI, there’s the Large Language Model, which processes data from a vast set of sources with a large number of constraints and rules. 


DigiCert validation bug sets up 83,267 SSL certs for revoking

One of the validation methods approved by the Certification Authority Browser Forum (CABF), whose guidelines provide best practices for securing internet transactions in browsers and other software, involves the customer adding a DNS CNAME record that includes a random value supplied by its certificate provider. The provider, in this case DigiCert, then does a DNS lookup and verifies that the random value is as provided, confirming that the customer controls the domain. The CABF requires that, in one format of the DNS CNAME entry, the random value be prefixed with an underscore, and DigiCert discovered that, in some cases, that character was not included, rendering the validation non-compliant. By CABF rules, those certificates must be revoked within 24 hours, with no exceptions. However, DigiCert said in an update to its status page Tuesday, and in an email to customers, “Unfortunately, some customers operating critical infrastructure are not in a position to have all their certificates reissued and deployed in time without critical service interruptions. To avoid disruption to critical services, we have engaged with browser representatives alongside these customers over the last several hours. ...”
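The underscore rule described above is simple to express in code. The following is a hypothetical sketch of the compliance check, assuming the random value has already been extracted from the DNS CNAME label — the function name and record format are illustrative assumptions, not DigiCert's actual validation code.

```python
# Hypothetical sketch of the CABF underscore-prefix check described above.
# The label format and function name are assumptions for illustration.

def cname_label_is_compliant(cname_label: str, random_value: str) -> bool:
    """Check that the provider-supplied random value appears in the DNS
    CNAME label with the required leading underscore."""
    return cname_label == "_" + random_value

# A label missing the underscore renders the validation non-compliant:
print(cname_label_is_compliant("_a1b2c3", "a1b2c3"))  # True  (compliant)
print(cname_label_is_compliant("a1b2c3", "a1b2c3"))   # False (missing underscore)
```

The bug DigiCert described amounts to generating labels of the second form, which pass a naive equality check against the random value alone but fail the CABF-mandated format.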


Mind the Gap: Data Quality Is Not “Fit for Purpose”

When talking about data quality, we must therefore be clear about whose purpose, what requirements, established when, and by whom. Within the context of the DMBoK definition, the answer is that every consumer evaluates the quality of a data set independently. Data is considered to be of high quality when it is fit for my purpose and satisfies my requirements, as established by me, when I need the data. Data quality, defined in this way, is truly in the eye of the beholder. Furthermore, data quality analyses cannot be leveraged by new consumers. For decades, we in decision support have been selling the benefits of leveraging data across applications and analyses. It has been the fundamental justification for data warehouses, data lakes, data lakehouses, etc. But misalignment between the purpose for which data was created and the purpose for which it is being used may not be immediately apparent, especially when the data is not well understood. The consequences are faulty models and erroneous analyses. We reflexively blame the quality of the data, but that’s not where the problem lies. This is not data quality. It is data fitness.


Navigating Hope and Fear in a Socio-Technical Future

It is not about just spending more; that isn’t really working. You must SPEND BETTER. I and other architects literally train for decades to both cut costs and make great investment decisions. Technical debt accrual, technical health goals, and technical strategy don’t just deserve a seat at the table. They are becoming the table. A little more rationally, in all complex engineering fields, we are required to get signoff from legitimate professionals who have been measured against legitimate and hard-earned competencies. Not only does this create more stable outcomes, it actually saves and makes the economy money. Instead of ‘paying for two ok systems’, we pay for ‘one great one’. ... In all complex engineering ecosystems it is not just outputs and companies that are regulated. The role and skills of architects and engineers are not secret, and they really aren’t that different by company. I believe I am the world’s expert on architecture skills, or at least one of a dozen of them. I have interviewed and assessed hundreds of companies and thousands of architects. It is time to begin licensing. And it must be handed to a real professional society. It cannot be a vendor consortium.



Quote for the day:

"You’ll never achieve real success unless you like what you’re doing." -- Dale Carnegie

Daily Tech Digest - July 15, 2024

For databases that need to hold and query hundreds of petabytes to exabytes of data, there are only a few viable choices. Be very careful about evaluating and planning exabyte-scale databases, as they are huge commitments and hard to change once the data is loaded. ... Data consistency is usually “strong” for SQL databases, meaning that all reads return the latest data. Data consistency may be anything from “eventual” to “strong” for NoSQL databases. Eventual consistency offers lower latency, at the risk of reading stale data. Consistency is the “C” in the ACID properties required for validity in the event of errors, network partitions, and power failures. The four ACID properties are Atomicity, Consistency, Isolation, and Durability. ... Some databases are faster at reads and queries, and others are faster at writes. The mix of reads and writes you expect from your application is a useful number to include in your database selection criteria, and can guide your benchmarking efforts. The optimum choice of index type differs between read-heavy applications and write-heavy applications.


Can we please stop talking about replacing employees with AI?

With mundane, repetitive tasks taken over by AI, your team will have more room for creative, human work. Shift your organizational structure and job descriptions to open up room for the strategic, relationship-oriented work that AI just can’t do as well as a human. With less on employees’ to do lists, leaders and managers can shift their focus to challenging and developing them in other more valuable areas, such as building relationships and soft skills. AI can help here, too, but not in the way you may think. Rather than replacing humans, it can unlock insights (from data most companies already have) that lead to hiring and nurturing the right humans for the job. Software used to identify the important soft skills that lead to success in specific roles can be extremely valuable for both employers and employees. Ultimately, improvements in job performance and satisfaction will boost morale and your bottom line — a win-win. ... A good leader knows that for any organization to survive, people cannot be replaced. AI should instead help make employees exponentially more valuable to their companies by spending more time on creative productivity, and for companies to become more valuable to employees by improving their lives.


The Complexity of Solving Performance Problems

Our brains work in two different modes, particularly as we’re solving problems: intuitive and analytical. Intuitive is the mode that our brain is in when we’re not actively and effortfully thinking about something. It’s automatic. Analytical is the mode where we’re actually applying effort and working diligently in a focused manner in our brains. ... Nobody ever says, “Come fix my system,” without telling you what the problem is, right? They might just tell you that it’s slow, but there’s almost always something along with that. Maybe “It’s slow and I think there’s an IO problem” or “It’s slow and it looks like we’re pegging the CPU between two and three o’clock in the morning.” There’s always a little bit of information. That information gives us the ability to start theorizing early. That’s the main characteristic of this approach. ... With the methodical approach, we’re basically going dogmatically through the steps outlined above. And the key characteristic of this one is that we have the mental discipline to postpone that theorizing step until after we’ve gathered the data.


Liquid cooling solutions for high-density computing applications

One of the major challenges when designing data centers is anticipating end-of-life options and accurately predicting the nominal heat load from the beginning. This unpredictability often leads to a disconnect between projected and actual heat demands. For instance, a system designed for 1MW might only require 150kW in reality. Liquid cooling intensifies this challenge due to fluctuating cooling demands; as servers process large amounts of data, energy demand, and heat rejection peak, then drop drastically when idle. ... Liquid cooling offers significant sustainability gains and surpasses traditional air cooling methods in efficiency. By capturing heat directly at its source, liquid cooling allows operation at higher temperatures within the cooling loop. This facilitates the use of free cooling techniques, leveraging ambient air for cooling without resorting to additional energy-intensive processes. Moreover, the captured heat can be reused, further enhancing energy efficiency and sustainability.


What to do if certificates for passive authentication fail

If you have just one source and it becomes unavailable, your entire authentication process can be compromised. A wise solution would be to use a combination of sources to ensure higher service availability. At the government-to-government level, countries share certificates for passive authentication with each other through diplomatic channels, thus collecting their own database of trusted certificates. ... No matter which sources you trust to get certificates and master lists from—be it a central repository or each publisher directly—the responsibility remains yours. Organizations like the ICAO or BSI may facilitate the process, but they are not liable if something goes wrong. Also, understanding the certificate system and its expiration dates is crucial. You need to set up alerts for approaching expiration dates to ensure you have a fresh certificate ready to replace the old one. The most responsible organizations don’t rely solely on external sources. They collect certificates from various countries, create their own master list, sign it with their own certificate, and ensure that it’s valid.
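The expiration-alert advice above can be sketched as a small check over a certificate store. This is a minimal illustration assuming the expiry dates have already been extracted from the certificates; the 30-day window and the certificate names are illustrative assumptions, not a standard.

```python
# Minimal sketch of an expiry alert for a trusted-certificate store,
# assuming expiry dates are already parsed out of the certificates.
# The 30-day alert window is an illustrative choice.
from datetime import datetime, timedelta

def expiring_soon(certs: dict[str, datetime],
                  now: datetime,
                  window_days: int = 30) -> list[str]:
    """Return names of certificates that expire within the alert window."""
    cutoff = now + timedelta(days=window_days)
    return [name for name, expiry in certs.items() if expiry <= cutoff]

now = datetime(2024, 7, 15)
store = {
    "country-A-csca": datetime(2024, 8, 1),  # inside the 30-day window
    "country-B-csca": datetime(2025, 1, 1),  # well outside the window
}
print(expiring_soon(store, now))  # ['country-A-csca']
```

In practice such a check would run on a schedule and feed the alerting system, so a fresh certificate can be staged before the old one lapses.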


Ignoring quantum threats in CBDC design is reckless

Relying on quantum-mechanical phenomena, quantum computing far outpaces classical computers and supercomputers in both the speed and capability with which it can solve complex problems. Public-key cryptography is one of its primary targets. Thus, it’s alarming that developers of CBDCs are not prioritizing quantum-resistant designs in their early technical roadmaps. As RAND mathematician Alvin Moon noted, “A steady stream of advances in quantum computing technology is undeniably increasing the urgency to standardize and implement post-quantum algorithms, or algorithms which are resistant to both present-day computing and on-the-horizon quantum computing.” This security flaw represents a serious vulnerability in the US monetary system if central bank digital currencies are eventually adopted by the Federal Reserve and authorized by Congress. Valid concerns about surveillance and potential harm to civil liberties are driving debates and may positively influence laws, regulations and ethical governance standards. Technical designs continue to evolve, albeit slowly, to address privacy risks. However, without secure and resilient technical rails, CBDC adoption will remain a serious risk.


A CISO's Summary Of The Cyber Resilience Act

As a baseline, you need to make sure that whatever you’re building is as secure as is reasonable to be. Your product must have minimal attack surfaces and should be hardened. This means that by default it should not allow unauthorized access; its data is encrypted or protected; and its data and commands can’t be intercepted or manipulated. It also means that your product must keep working, even under a DoS attack, and that it mustn’t interrupt other devices, even when attacked with exploits. Additionally, your product needs to be able to monitor or log changes in the device, as well as receive security updates or rollbacks. This includes direct or remote updates, user notifications about updates and the ability to roll back updates or reset the product to a factory/default state. On top of that, you’ll also need to follow or provide documentation and information about your product, including a description of the design, development and vulnerability handling process. This would also include a formal SBOM, where you’d include more information on the components in your product and its potential vulnerabilities.


The Compelling Need for Low-Code in Legacy Application Modernization

With technology at the centre of India’s transformation initiatives, it is imperative that its base digital infrastructure undergoes transformation if the nation hopes to achieve its goals. However, in this quest to become a digital leader lies a huge challenge: legacy systems. Even today, in many organisations, legacy systems are still used to run daily processes and are a huge hurdle in the path of digital transformation. Legacy systems not only consume a significant portion of IT budgets but also make businesses incapable of adapting to fast-changing customer needs. ... There is another compelling method that has gained huge traction in recent times. Low-code visual software development, which helps developers accelerate the software development process using a friendly and intuitive drag-and-drop process. The low-code development approach offers organisations the ability to modernise their core systems without the cost and risk of traditional development approaches. Let us look at some of the common legacy systems that are prevalent in most organisations today and understand how low-code platforms can be used effectively. 


How To Establish A Culture Of Cyber-Resilience Within An Organization

Workers ought to be aware of how their activities affect the security posture of the company and should feel confident enough to serve as the first line of defense. It is also essential to routinely update training programs to reflect the latest threat landscape. Organizations may greatly lower their susceptibility to cyberattacks by encouraging a culture of ongoing education and awareness. ... Incorporate cybersecurity into all company operations to ensure consideration of security issues during each undertaking and decision-making process. This interconnectedness allows each department to understand its responsibility in safeguarding the company’s digital assets, leading to a holistic approach to security. One essential first step is to create security rules and processes that support company goals. Making sure cybersecurity needs are part of the planning and development phases of a project facilitates early risk identification and the application of suitable safeguards. ... More than simply putting security technology into place, an organization must develop a cyber-resilience culture.


Mastering Long-Running Processes in Modern Architectures

It is important to note that long-running processes refer to the act of waiting, not algorithms running for extended periods. This waiting can be due to human actions, external responses, or intentionally allowing time to pass. These processes can take hours, days, weeks, or even longer. ... Besides business reasons for waiting, there are also technical ones, such as asynchronous communication delays, failures in message delivery, and the unavailability of peer services in distributed systems. If not addressed, these issues can lead to cascading failures. ... As soon as you start handling the complexities of distributed systems and long-running processes in payment processing, it is essential to embrace asynchronous communication in your API design. Although payments are usually quick and straightforward, situations like declined credit cards or unavailable services demand a different approach. We can build more adaptable and robust payment systems by designing systems that can handle immediate and delayed responses and using signals like a 202 HTTP code to indicate that processing will continue in the background.
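The 202-based pattern described above can be sketched without a web framework: the payment endpoint accepts the request, returns 202 with a job identifier, and the caller polls for the outcome. The job store and status names below are illustrative assumptions, and the "background" settlement is done inline to keep the sketch self-contained.

```python
# Framework-free sketch of the asynchronous-response pattern described
# above: submitting a payment returns HTTP 202 plus a job id, and the
# caller polls until processing completes. Status names are illustrative.
import uuid

JOBS: dict[str, str] = {}  # job id -> status

def submit_payment(card_ok: bool) -> tuple[int, str]:
    """Accept the payment for background processing and return 202."""
    job_id = str(uuid.uuid4())
    JOBS[job_id] = "processing"
    # In a real system a worker would settle the payment asynchronously;
    # here we settle it immediately so the sketch runs on its own.
    JOBS[job_id] = "completed" if card_ok else "declined"
    return 202, job_id

def poll_payment(job_id: str) -> str:
    """Return the current status of a payment job."""
    return JOBS.get(job_id, "unknown")

status_code, job = submit_payment(card_ok=True)
print(status_code)        # 202
print(poll_payment(job))  # completed
```

The key design choice is that the 202 response commits only to *accepting* the work, which lets the same API shape handle both the quick happy path and the delayed cases such as declined cards or unavailable downstream services.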



Quote for the day:

"Your problem isn't the problem. Your reaction is the problem." -- Anonymous

Daily Tech Digest - March 22, 2021

Bitcoin’s Greatest Feature Is Also Its Existential Threat

The botnet’s designers are using this idea to create an unblockable means of coordination, but the implications are much greater. Imagine someone using this idea to evade government censorship. Most Bitcoin mining happens in China. What if someone added a bunch of Chinese-censored Falun Gong texts to the blockchain? What if someone added a type of political speech that Singapore routinely censors? Or cartoons that Disney holds the copyright to? In Bitcoin’s and most other public blockchains there are no central, trusted authorities. Anyone in the world can perform transactions or become a miner. Everyone is equal to the extent that they have the hardware and electricity to perform cryptographic computations. This openness is also a vulnerability, one that opens the door to asymmetric threats and small-time malicious actors. Anyone can put information in the one and only Bitcoin blockchain. Again, that’s how the system works. Over the last three decades, the world has witnessed the power of open networks: blockchains, social media, the very web itself. What makes them so powerful is that their value is related not just to the number of users, but the number of potential links between users.


India’s Quest Towards Quantum Supremacy

The digital partnership between the Indian Institute of Science Education and Research (IISER) at Pune and Finland’s Aalto University has created a high probability that India will get its first quantum computer. ... Talking about the partnership, Neeta Bhushan, the joint secretary (Central Europe), external affairs ministry, stated that the idea of jointly developing a quantum computer with the use of AI and 5G technology is an important area of collaboration for both countries. Considering that Nokia and other Finnish companies are leading the world in mobile technology growth, this digital partnership will see the two countries collaborating on quantum technologies and computing. Hence, the partnership will have the leverage to deploy the latest technologies available with both countries. ... The partnership can lead us towards a new ecosystem altogether, and many things can be expected out of the same. The post-COVID changes in global power-sharing and the recent technological developments to handle the crisis have brought India to the centre stage. Consequently, quantum encryption is one of the basic applications derived from this collaboration.


Remote working still isn't perfect. These are the things that need fixing

A new report from O2 Business explores these insights in greater depth. The UK mobile operator surveyed 2,099 workers who had previously been office-based to understand how their needs and expectations of work had changed. It found that the majority of employees welcomed the notion of splitting their time between the office and home-working going forward, but also called for a closer alignment of operations, IT and HR in order to support individual work choices and maximize workplace productivity. Generally, employees are satisfied with their organization's response to the pandemic, O2 found: 69% of workers felt that their employers had supported them during the pandemic, with just 11% disagreeing with this statement. But less than two-thirds (65%) of employees felt confident that their organization was prepared for the future world of work. O2 said this indicated some businesses would struggle to adapt to the more flexible working arrangements that many are planning to adopt post-pandemic. The mad scramble to remote working has been one of the most trying aspects for businesses over the past year.


Fight microservices complexity with low-code development

A low-code platform takes care of nearly everything that conventionally is coded for an application. Most of the low-level programming and integration work is taken care of via tool configurations, which saves developers a lot of time and headaches. However, think carefully about where you apply low-code in a microservices architecture. As long as the app is simple, clean and doesn't require many integration points, low-code development might be the right alternative to more manual and complex microservices projects. Low-code builds are an easy choice for applications that don't need to integrate with other databases or only rely on a series of small tables. Short-lived conference apps or marketing promotions that run with user ID information are good examples of this. However, a low-code approach does not replace large-scale microservices development. Once you need to share information between applications in real time, the tools and programming techniques involved become much more sophisticated. While the low-code approach helps developers steer clear of over-engineering apps that don't need it, low-code likely won't provide the database integration, messaging or customization capabilities needed for an enterprise-level microservices architecture.


Edge Computing Growth Drives New Cybersecurity Concerns

Effectively protecting the edge means understanding how cybersecurity protection schemas work in an enterprise that uses not only edge computing, but also the cloud and traditional resources. Most enterprises are clearly focused on data security and application security, and are using tools such as web application firewalls (WAF), runtime application self-protection (RASP), data exfiltration protection and, of course, endpoint protection. Since the edge has the ability to “touch” data and applications, as well as use identity to connect and determine entitlements, a great deal of potentially sensitive information passes through the edge. Much, if not all of that traffic moves through a content delivery network (CDN), where hosts provide the connectivity and, hopefully, wrap encryption around that traffic to protect it from interception. However, intrusion and data exfiltration still happens. “Digital transformation is driving more and more applications to the edge, and with that movement, businesses are losing visibility into what is actually happening on the network, especially where edge operation occurs,” Hathaway said. “Gaining visibility allows cybersecurity professionals to get a better understanding of what is actually happening at the edge,” he said.


Move Your Automation Efforts From Pilot To Reality

Talent is another crucial part of the equation that not enough customers take into account. I’ve worked with many customers that don’t have dedicated automation centers of excellence, or specific in-house expertise to tackle automation the right way. An enterprise with multiple technologies in place must ensure that those technologies are communicating with each other. By bringing together technical experts, your processes can be better visualized and monitored end-to-end across the organization, leading to a higher chance of success. The complexity and effort involved in this kind of endeavour can be off-putting, but it’s worth the reward. Nor is it truly as complicated as it sounds — execution management systems, for example, already bring together technologies like process mining, automation and AI into a seamless, intelligent execution layer. Bring in or train the right people to champion it, and you’ve got a head start on the next step of the journey. So while many companies haven’t been able to bring the full promise of automation to bear at scale just yet, that promise is getting closer to becoming a reality every day.


HowTo: Optimize Certificate Management to Identify and Control Risk

End-to-end certificate management gives businesses complete visibility and lifecycle control over any certificate in their environment, helping them reduce risk and control operational costs. Even in the most complex enterprise environments, certificate automation offers speed, flexibility and scale. Full visibility over all digital certificates and keys means that even the largest enterprises can have a centralized view of digital identities and security processes. Security leaders can then access expiration dates and maintain cryptographic strength while avoiding the time-consuming, demanding, and risky task of manually discovering, supervising, and renewing certificates. As organizations continue to grow and evolve, so does the range of certificates deployed and the set of people deploying them, which increases the potential for certificates to be installed in your environment that are out of sight of IT security teams and left unmanaged. To avoid being blindsided by these “rogue” certificates, enterprises are turning toward automated universal discovery.


On the Road to Good Cloud Security: Are We There Yet?

The research also uncovered a disconnect that raises the question: Is that confidence misplaced? When asked to rate the level of visibility the security team had into their organization's use of specific cloud service types, including software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS), that same level of confidence faltered. For example, when asked to rate the security team's level of visibility into their organization's SaaS usage on a five-point scale, with 1 being the highest level, only 18% gave it a 1 and 27% gave it a 2. Visibility into PaaS and IaaS was rated as only slightly better. At the same time, respondents' knowledge of the shared responsibility model was found to be lacking. When asked to indicate whether the customer or cloud provider was responsible for securing a list of seven different elements that make up an IaaS account, around half of respondents gave the wrong answer. Specifically, 63% erroneously indicated that the cloud provider was responsible for securing virtual network connections, 55% erroneously indicated that the cloud provider was responsible for securing applications, and 50% got it wrong when they said the cloud provider was responsible for securing users who were accessing cloud data and applications.


5 AI-for-Industry Myths Debunked

Up until, and during, the AI hype in the nineties, artificial intelligence was a scientific discipline that almost exclusively dealt with data and algorithms. Over the past decades however, the field has matured, and AI has become an integral part of automated decisioning systems that are at the heart of what we do as individuals and organizations. Consequently, a large portion of AI research, development, and implementation encompasses people and processes. I remember having a business conversation with a large energy provider in which we were talking about automated systems and data-driven methods that, driven by customer data and smart meters, could enhance their customers’ experience. One hour into the meeting, they suddenly asked: “This all looks very promising, but shouldn’t we also do something with AI?” ... If you have the combined luck and skills, you can probably cook a decent meal with ingredients that come from a randomly filled refrigerator. The real question, however, is: “What do you want to achieve?” In the example of the refrigerator, it might occasionally be an effective solution if you need to quickly fill stomachs and don’t have time to go shopping. 


Cloudflare wants to be your corporate network backbone

With Magic WAN, Cloudflare aims to simplify that. Cloudflare's global Anycast network is already built for the high performance and availability needed by its core CDN business. The company has data centers in more than 200 cities across over 100 countries, with local peering at internet exchange points. Wherever branch offices or employees are located, chances are high they will connect to a server close to them, and their traffic will then be routed efficiently through Cloudflare's private network, benefiting from its performance optimizations, smart routing, and security. With Magic WAN, organizations only need to set up Anycast GRE tunnels from their offices or data centers to Cloudflare; they can then define their private networks and routing rules in a central dashboard. Cloudflare's existing Argo Tunnel, Network Interconnect, and soon IPsec can also be used to connect data centers and VPCs to its network, while roaming employees connect using Cloudflare WARP, a secure tunneling solution built around the highly performant WireGuard VPN protocol. This also solves the scalability and performance issues organizations faced with traditional VPN gateways and concentrators when the pandemic suddenly confronted them with a large remote workforce.



Quote for the day:

"A true dreamer is one who knows how to navigate in the dark" -- John Paul Warren

Daily Tech Digest - November 05, 2020

Deep Neural Networks Help to Explain Living Brains

Artificial neural networks are built with interconnecting components called perceptrons, which are simplified digital models of biological neurons. The networks have at least two layers of perceptrons, one for the input layer and one for the output. Sandwich one or more “hidden” layers between the input and the output and you get a “deep” neural network; the greater the number of hidden layers, the deeper the network. Deep nets can be trained to pick out patterns in data, such as patterns representing the images of cats or dogs. Training involves using an algorithm to iteratively adjust the strength of the connections between the perceptrons, so that the network learns to associate a given input (the pixels of an image) with the correct label (cat or dog). Once trained, the deep net should ideally be able to classify an input it hasn’t seen before. In their general structure and function, deep nets aspire loosely to emulate brains, in which the adjusted strengths of connections between neurons reflect learned associations. Neuroscientists have often pointed out important limitations in that comparison: Individual neurons may process information more extensively than “dumb” perceptrons do, for example, and deep nets frequently depend on a kind of communication between perceptrons called back-propagation that does not seem to occur in nervous systems.
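To make the structure described above concrete, here is a minimal, hypothetical forward pass through a "deep" net in plain Python. The layer sizes and weights are arbitrary illustrations; a real deep net would also include the iterative training step (such as back-propagation) discussed above.

```python
import math
import random

random.seed(0)  # make the arbitrary weights reproducible

def layer(inputs, weights, biases):
    """One fully connected layer of perceptrons:
    weighted sum per neuron, passed through a sigmoid activation."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# A "deep" net: 3 inputs -> 4 hidden perceptrons -> 2 outputs.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 4
w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
b2 = [0.0] * 2

hidden = layer([0.5, -0.2, 0.1], w1, b1)  # the sandwiched "hidden" layer
output = layer(hidden, w2, b2)            # e.g., scores for "cat" vs "dog"
```

Training would adjust `w1`/`w2` so that inputs map to correct labels; adding more hidden layers between input and output is what makes the network "deeper".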


Future of Corporate Governance Through Blockchain-powered Smart Companies

In essence, a Smart Company is an entirely new business type, analogous to an LTD or IBC, except that it rivals all traditional models by being fully automated on the blockchain. And that makes just that big a difference. When you can run your business in a structure that is legally compliant, yet all of its transactions happen in real time and are verified directly on the blockchain, the game changes. For business owners, this means managerial ownership structures become more transparent. Corporate voting becomes easier and more accurate, and secret strategies such as ‘empty voting’ become more difficult to execute. The ability to issue corporate shares as ERC-20 tokens adapted to securities laws offers the means to assert and transfer ownership of, and liability for, real-world assets with actual value. To give a rough sense of the magnitude of this untapped potential, the total value of illiquid assets, including real estate and gold, has been estimated at no less than $11 trillion, roughly the nominal GDP of China, the world’s second-largest economy today. For shareholders, the Smart Company model offers nearly free trading and transparency in ownership records, while showing real-time transfers of shares from one owner to another.



Agile development: How to tackle complexity and get stuff done

Holt believes his key role as CTO is to create a culture in the organisation where his people feel comfortable and confident trying new things. Rather than being scared of risk-taking, he says tech leaders should encourage their IT professionals to innovate and to develop customer-centred products and services in an iterative manner. "Those are the kind of people who aren't afraid of the complexity, who are able to get in amongst it, and that's where you get really good solutions," he says. Holt says engaging with a challenge involves great teamwork. His organisation is always on the lookout for people who can manage complexity, and the solution often involves agility in organisational culture as well as in product development. ... Danny Attias, chief digital and information officer at British charity Anthony Nolan, says tech executives looking to deal with complexity must ensure they're working to create a joined-up organisation. More often than not, that means using Agile principles to break problems down into small parts that can be managed effectively across the organisation. "My career has been about decoupling dependencies wherever you possibly can," he says.


The world needs women who code

A lot of women are not aware of the power of IT. The industry’s reputation as a boys’ club belies the fact that women are actually rising in many technology fields, both in number and in title. Some may think they have to already know a bunch of code to get started; it's likely that many women simply don’t realize how much opportunity there is for them, even as beginners. A slightly different, yet related, reason is fear. Because of the percentage of men in this field, some women may feel that there will be too much competition, that they won’t be able to measure up against men with experience, or that they'll be overlooked in favour of men without experience. But nowadays, IT companies are making strong efforts to welcome and support women, running programs that encourage women to learn about various tech disciplines and provide pathways for them to join the industry. And whenever a woman joins the industry, it gives a boost of confidence to other women too. I am constantly inspired by the many women I know who are doing amazing things in tech. ... Admittedly, coding can seem overwhelming in the beginning, but don’t worry; it’s like that for almost everyone. Soon enough, what seems like gibberish at first starts to come together, and you learn to harness it to make things work and accomplish tasks.


Kafka at the Edge — Use Cases and Architectures

Event streaming with Apache Kafka at the edge is not cutting-edge anymore. It is a common approach to providing the same open, flexible, and scalable architecture at the edge as in the cloud or data center. Possible locations for a Kafka edge deployment include retail stores, cell towers, trains, small factories, restaurants, and so on. I have already discussed the concepts and architectures in detail in "Apache Kafka is the New Black at the Edge" and "Architecture patterns for distributed, hybrid, edge and global Apache Kafka deployments". This blog post is an add-on focusing on use cases for Kafka at the edge across industries. To be clear before you read on: the edge is NOT a data center, and "Edge Kafka" is not simply yet another IoT project using Kafka in a remote location. Edge Kafka is an essential component of a streaming nervous system that spans IoT (or OT in Industrial IoT) and non-IoT (traditional data-center/cloud) infrastructures. This post focuses on scenarios where the Kafka clients AND the Kafka brokers run at the edge. This enables edge processing, integration, decoupling, low latency, and cost-efficient data processing. Some IoT projects are instead built like “normal” Kafka projects, i.e., in the (edge) data center or cloud.
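The decoupling benefit of running brokers at the edge can be sketched without a real cluster: an edge site keeps producing events locally even when the uplink to the central data center is down, and forwards them when connectivity returns. This is an illustrative pure-Python store-and-forward sketch, not the Kafka API itself; all names are hypothetical.

```python
from collections import deque

class EdgeBuffer:
    """Store-and-forward: accept events locally, flush upstream when possible."""
    def __init__(self):
        self.pending = deque()

    def produce(self, event):
        # Local append always succeeds, even while the site is offline.
        self.pending.append(event)

    def flush(self, link_up, send):
        """Drain buffered events through `send` while the uplink is available."""
        sent = 0
        while link_up and self.pending:
            send(self.pending.popleft())
            sent += 1
        return sent

buf = EdgeBuffer()
buf.produce({"sensor": "temp", "value": 21.5})   # recorded while offline
buf.produce({"sensor": "temp", "value": 21.7})
delivered = []
buf.flush(link_up=True, send=delivered.append)   # uplink restored: forward all
```

In a real deployment, the edge Kafka broker plays the role of `EdgeBuffer`, and replication (e.g., MirrorMaker or Cluster Linking) plays the role of `flush`.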


How smartphones became IoT’s best friend and worst enemy

Relying on the ubiquity of smartphones and the rise of remote controls, users and vendors alike have embraced the move away from physical device interfaces. This evolution in the IoT ecosystem, however, brings major benefits AND serious drawbacks. While users enjoy the remote capabilities of companion apps and vendors bypass the need for hardware interfaces, studies show that these apps present serious cybersecurity risks. For example, the communication between an IoT device and its app is often neither properly encrypted nor authenticated, and these weaknesses enable exploits that achieve remote control of victims’ devices. It is worth remembering that connected devices have not always been this way. I’m sure others, like myself, need not cast their minds far back to a time when smartphones did not even exist. User input in those halcyon days relied on physical interfaces on the device itself, typically basic touch screens or two-line LCD displays. Though functional, these physical interfaces were certainly limited (and limiting) compared with the applications that superseded them. Devices without physical interfaces are smaller, consume less power, and look better.
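The encryption-and-authentication gap described above is usually closed with TLS plus certificate verification. As a minimal sketch of what a companion app's client side should insist on, using Python's standard `ssl` module (no real device endpoint is assumed here):

```python
import ssl

# A properly configured client context: certificate verification and
# hostname checking are both enabled by default.
ctx = ssl.create_default_context()

# The dangerous shortcut that audits often find in companion apps:
# disabling verification entirely, which permits trivial interception.
insecure = ssl.create_default_context()
insecure.check_hostname = False          # must be disabled first
insecure.verify_mode = ssl.CERT_NONE     # then certificates go unchecked
```

Wrapping the device connection with `ctx.wrap_socket(...)` would then reject unauthenticated endpoints, whereas the `insecure` context would accept any peer.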


Singapore government rolls out digital signature service

Called Sign with SingPass, the service is being rolled out by Assurity, a subsidiary of the Government Technology Agency (GovTech), together with eight digital signing application providers, including DocuSign, Adobe and Kofax. GovTech said each digital signature is identifiable and cryptographically linked to the signer, while signed documents are platform-agnostic and can be viewed with the user’s preferred system. No document data will be transferred during the digital signing process. Assurity will also issue digital certificates for signatures created under the service. Upon Assurity’s accreditation under Singapore’s Electronic Transactions Act, signatures made with the service will be regarded as secure electronic signatures. GovTech said the service will be useful for organisations and their customers amid the growing number of online transactions, and it will test the service with the Singapore Land Authority (SLA) for the digital signing of property caveats in the coming weeks. Kok Ping Soon, chief executive of GovTech, said the high-security document-signing service will help businesses save costs and manpower by removing the need to manually verify physical paperwork.
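The property GovTech describes, a signature identifiable and cryptographically linked to the signer, is what any asymmetric signature scheme provides. Here is a generic sketch (not SingPass's actual implementation) using the third-party `cryptography` package and Ed25519; the document text is an invented example.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The signer holds the private key; a certificate authority (Assurity,
# in the article) binds the matching public key to the signer's identity.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

document = b"example property caveat document"
signature = private_key.sign(document)

# Anyone with the public key can verify the signer and the integrity
# of the document; any alteration invalidates the signature.
public_key.verify(signature, document)          # succeeds silently
try:
    public_key.verify(signature, document + b"x")
    tampered_ok = True
except InvalidSignature:
    tampered_ok = False                          # tampering is detected
```

The digital certificates Assurity issues serve to tie `public_key` to a verified real-world identity, which is what makes the signature legally attributable.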


Is your approach to data protection more expensive than useful?

With the recent increase in cyberattacks and exponential data growth, protecting data has become job one for many IT organizations. And in many cases, their biggest hurdle is managing an aging backup infrastructure with limited resources. Tight budgets should not discourage business leaders from modernizing data protection. Organizations that hang on to older backup technology don't have the tools they need to face today's threats. Rigid, siloed infrastructures aren't agile or scalable enough to keep up with fluctuations in data requirements, and they are based on an equally rigid backup approach. Traditional backup systems behave like insurance policies, locking data away until it's needed. That's like keeping an extra car battery in the garage, waiting for a possible crisis. The backup battery might seem like a reasonable preventive measure, but most of the time, it's a waste of space. And if the crisis never arises, it's an unnecessary upfront investment, making it more expensive than useful. In the age of COVID-19, where cash is king and on-site resources are particularly limited, some IT departments are postponing data protection modernization, looking to simplify overall operations and lower infrastructure costs first. That plan can block a company's progress. 


Taking Control of Confusing Cloud Costs

It’s difficult to compare services across multiple clouds, because each provider uses different terminology. What Azure calls a ‘virtual machine’ is a ‘virtual machine instance’ on GCP and just an ‘instance’ on AWS. A group of these instances is called an ‘autoscaling group’ on both Amazon and GCP, but a ‘scale set’ on Azure. It’s hard even to keep track of what you’re purchasing and whether a comparable service exists on another cloud, because the naming conventions differ. As outlined above with the simple web application using Lambda, it would be very time-consuming to compare what it would cost to host a web application in one cloud versus another. It would take technical knowledge of each cloud provider just to translate how you could comparably host it with one set of services against another, before you even got to prices. Cloud pricing uses an on-demand model, a far cry from on-prem, where you could deploy things and leave them running 24/7 without affecting the cost (apart from energy). In the cloud, everything is based on how long you use it: per hour, per minute, per second, per request, or per amount consumed.
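To see why per-unit billing changes the calculus, consider a hypothetical back-of-the-envelope comparison between an always-on VM and a serverless function. All rates and workload numbers below are invented for illustration, not any provider's real prices.

```python
# Hypothetical rates (NOT real provider prices).
VM_PER_HOUR = 0.10                 # always-on instance, billed per hour
SERVERLESS_PER_REQUEST = 0.0000002
SERVERLESS_PER_GB_SEC = 0.0000166667

# Hypothetical monthly workload.
hours_per_month = 730
requests = 1_000_000
avg_duration_s = 0.2
memory_gb = 0.5

# The VM bills for every hour it exists, busy or idle.
vm_cost = VM_PER_HOUR * hours_per_month

# The serverless function bills only for requests actually served.
serverless_cost = (requests * SERVERLESS_PER_REQUEST
                   + requests * avg_duration_s * memory_gb * SERVERLESS_PER_GB_SEC)
```

With these made-up numbers, the fixed VM bill dwarfs the usage-based one at low traffic, but the comparison flips as request volume grows, which is exactly why such estimates demand per-provider technical knowledge.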


Five ways to avoid digital transformation fatigue

Change fatigue stems from uncertainty and a lack of clarity around the strategic intent and implementation of the program. Too often, digitalisation and new tools are introduced into the company without proper project planning or thought about how the benefits will be explained to employees. Dig deep into the value-proposition narrative before the new digital tool is implemented. Start by finding out whether the management and leadership teams are aligned on the transformation's strategic intent and outcomes. If not, you need to go back to the drawing board. This should ideally map out clear target business outcomes as well as the transformation's impact on people, processes, and tools: what is happening and how it will affect them. Many workers may feel they should be doing their 'actual job' instead of learning to navigate something they are not sure will benefit them. Be ready to present to each role why the new tool is necessary, and avoid explaining it in a way that sounds as if the company is the only one that will benefit. Incentives for employees should be clearly stated before the change starts.



Quote for the day:

"Don't just see what others do to you. Also see what you do to others." -- The Golden Mirror