
Daily Tech Digest - February 22, 2026


Quote for the day:

"If you care enough for a result, you will most certainly attain it." -- William James



The data center gold rush is warping reality

The real impact isn’t people—it’s power, land, transmission capacity, and water. When you drop 10 massive facilities into a small grid, demand spikes don’t just happen inside the fence line. They ripple outward. Utilities must upgrade substations, reinforce transmission lines, procure new-generation equipment, and finance these investments. ... Here’s the part we don’t say out loud often enough: High-tech companies are spending massive amounts of money on data centers because the market rewards them for doing so. Capital expenditures have become a kind of corporate signaling mechanism. On earnings calls, “We’re investing aggressively” has become synonymous with “We’re winning,” even when the investment is built on forecasts that are, at best, optimistic and, at worst, indistinguishable from wishful thinking. ... The bet is straightforward: When demand spikes, prices and utilization rise, and those who built first make bank. Build the capacity, fill the capacity, charge a premium for the scarce resource, and ride the next decade of digital expansion. It’s the same playbook we’ve seen before in other infrastructure booms, except this time the infrastructure is made of silicon and electrons, and the pitch is wrapped in the language of transformation. ... Then there’s the cost reality. AI systems, especially those that deliver meaningful, production-grade outcomes, often cost five to ten times as much as traditional systems once you account for compute, data movement, storage, tools, and the people required to run them responsibly.


Chip-processing method could assist cryptography schemes to keep data secure

Just like each person has unique fingerprints, every CMOS chip has a distinctive “fingerprint” caused by tiny, random manufacturing variations. Engineers can leverage this unforgeable ID for authentication, to safeguard a device from attackers trying to steal private data. But these cryptographic schemes typically require secret information about a chip’s fingerprint to be stored on a third-party server. This creates security vulnerabilities and requires additional memory and computation. ... “The biggest advantage of this security method is that we don’t need to store any information. All the secrets will always remain safe inside the silicon. This can give a higher level of security. As long as you have this digital key, you can always unlock the door,” says Eunseok Lee, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this security method. ... A chip’s PUF can be used to provide security just like the human fingerprint identification system on a laptop or door panel. For authentication, a server sends a request to the device, which responds with a secret key based on its unique physical structure. If the key matches an expected value, the server authenticates the device. But the PUF authentication data must be registered and stored in a server for access later, creating a potential security vulnerability.
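To make the weakness concrete, here is a minimal Python sketch of the conventional challenge-response flow described above, with the chip's fingerprint simulated by a keyed hash. Everything here (the SimulatedPUF class, the CRP table, the sizes) is hypothetical illustration, not the MIT scheme itself; the point is that the server-side table of expected responses is exactly the stored secret the new method eliminates.

```python
import hmac
import secrets

# Hypothetical stand-in for a chip's physical fingerprint: in a real PUF the
# response comes from manufacturing variations, not from a stored key.
class SimulatedPUF:
    def __init__(self):
        self._physical_state = secrets.token_bytes(32)  # stands in for silicon randomness

    def respond(self, challenge: bytes) -> bytes:
        # The chip derives a response from its physical structure; modeled
        # here as a keyed hash for illustration only.
        return hmac.new(self._physical_state, challenge, "sha256").digest()

# Enrollment: the server records challenge-response pairs (CRPs) ahead of
# time. This stored table is exactly the vulnerability the article describes.
device = SimulatedPUF()
crp_table = {}
for _ in range(4):
    challenge = secrets.token_bytes(16)
    crp_table[challenge] = device.respond(challenge)

# Authentication: the server replays a stored challenge and compares responses.
challenge, expected = next(iter(crp_table.items()))
assert hmac.compare_digest(device.respond(challenge), expected)
print("device authenticated")
```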


What MCP Can and Cannot Do for Project Managers Today

The most mature MCPs for PM are official connectors from the platforms themselves. Atlassian’s Rovo MCP Server connects Jira and Confluence, generally available since late 2025. Wrike has its own MCP server for real-time work management. Dart exposes task creation, updates, and querying through MCP. ClickUp does not have an official MCP server, but multiple community implementations wrap its API for task management, comments, docs, and time tracking. ... Most PM work is human and stays human. No LLM replaces the conversation where you talk a frustrated team member through a scope change, or the negotiation where you push back on an unrealistic deadline from the sponsor. No LLM runs a planning workshop or navigates the politics of resource allocation. But woven through all of that is documentation. Every conversation, every decision, every planning session produces written output. The charter that captures what was agreed. ... Beyond documentation, scheduling is where I expected MCP to add the most computational value. This is where the investigation got interesting. Every PM builds schedules. The standard method is CPM: define tasks, set dependencies, estimate durations, calculate the critical path. MS Project does this. Primavera does this. A spreadsheet with formulas does this. CPM is well understood and universally used. CPM does exactly what it says: it calculates the critical path given dependencies and durations. 
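Since CPM itself is simple and well understood, a minimal sketch helps show what MS Project, Primavera, or a spreadsheet is actually computing. The tasks, durations, and dependencies below are hypothetical; the forward pass finds each task's earliest finish, the backward pass finds the latest finish that does not delay the project, and tasks with zero slack form the critical path.

```python
# Minimal critical-path (CPM) sketch. Task names and durations are hypothetical.
tasks = {
    "design":  {"duration": 3, "deps": []},
    "build":   {"duration": 5, "deps": ["design"]},
    "test":    {"duration": 2, "deps": ["build"]},
    "docs":    {"duration": 1, "deps": ["design"]},
    "release": {"duration": 1, "deps": ["test", "docs"]},
}

# Forward pass: earliest finish of each task.
earliest = {}
def earliest_finish(name):
    if name not in earliest:
        t = tasks[name]
        start = max((earliest_finish(d) for d in t["deps"]), default=0)
        earliest[name] = start + t["duration"]
    return earliest[name]

project_end = max(earliest_finish(n) for n in tasks)

# Backward pass: latest finish that doesn't delay the project. Processing in
# decreasing earliest-finish order visits every task before its dependencies.
latest = {n: project_end for n in tasks}
for n in sorted(tasks, key=lambda name: -earliest[name]):
    for d in tasks[n]["deps"]:
        latest[d] = min(latest[d], latest[n] - tasks[n]["duration"])

# Zero slack (earliest finish == latest finish) marks the critical path.
critical = [n for n in tasks if earliest[n] == latest[n]]
print(f"duration={project_end}, critical tasks={critical}")
```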


How to Write a Good Spec for AI Agents

Instead of overengineering upfront, begin with a clear goal statement and a few core requirements. Treat this as a “product brief” and let the agent generate a more elaborate spec from it. This leverages the AI’s strength in elaboration while you maintain control of the direction. This works well unless you already feel you have very specific technical requirements that must be met from the start. ... Many developers using a strong model do exactly this. The spec file persists between sessions, anchoring the AI whenever work resumes on the project. This mitigates the forgetfulness that can happen when the conversation history gets too long or when you have to restart an agent. It’s akin to how one would use a product requirements document (PRD) in a team: a reference that everyone (human or AI) can consult to stay on track. ... Treat specs as “executable artifacts” tied to version control and CI/CD. The GitHub Spec Kit uses a four-phase gated workflow that makes your specification the center of your engineering process. Instead of writing a spec and setting it aside, the spec drives the implementation, checklists, and task breakdowns. Your primary role is to steer; the coding agent does the bulk of the writing. ... Experienced AI engineers have learned that trying to stuff the entire project into a single prompt or agent message is a recipe for confusion. Not only do you risk hitting token limits; you also risk the model losing focus due to the “curse of instructions”—too many directives causing it to follow none of them well. 
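As an illustration of the "product brief" starting point, here is one hypothetical minimal spec: a clear goal statement plus a few core requirements, left deliberately thin so the agent can elaborate it.

```markdown
# Spec: CSV de-duplication CLI (hypothetical example)

Goal: a command-line tool that removes duplicate rows from a CSV file
while preserving the original row order.

Core requirements:
- Accept input and output paths as arguments; never modify the input file.
- Treat rows as duplicates only when every column matches exactly.
- Stream the file so memory use stays bounded on large inputs.

Out of scope (for now): fuzzy matching, non-CSV formats.
```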


NIST’s Quantum Breakthrough: Single Photons Produced on a Chip

The arrival of quantum computing lies in the future, but the threat is current. Commercial and federal organizations need to protect against quantum computing decryption now. Various new mathematical approaches have been developed for PQC, but while they may be theoretically secure, they are not provably secure. Ultimately, the only provably secure key distribution must be based on physics rather than math. ... While this basic approach is secure, it is neither efficient nor cheap. “Quantum key distribution is an expensive solution for people that have really sensitive information,” continues Bruggeman. “So, think military primarily, and some government agencies where nuclear weapons and national security are involved.” Current implementations tend to use available dark fiber that still has leasing costs. ... “The big advance from NIST is they are able to provide single photons at a time, as opposed to sending multiple photons,” continues Bruggeman. Single photons aren’t new, but in the past, they’ve usually been sent as part of a stream of photons. “So, they encode the key information on those streams, and that leads to replication. And in cryptography, you don’t want to have replication of data.” There is currently a comfort level in this redundancy, since if one photon in the stream fails, the next one might succeed. But NIST has separately developed Superconducting Nanowire Single-Photon Detectors (SNSPDs), which would allow single photons to be reliably sent and received over longer distances – up to 600 miles.


Quantum security is turning into a supply chain problem

The core issue is timing. Sensitive supplier and contract data has a long shelf life, and adversaries have already started collecting encrypted traffic for future decryption. This is the “harvest now, decrypt later” model, where encrypted records are stolen and stored until quantum computing becomes capable of breaking current public-key encryption. That creates a practical security problem for cybersecurity teams supporting procurement, third-party risk, and supply chain operations. ... There’s growing pressure to adopt post-quantum cryptography (PQC), including partner expectations, insurance scrutiny, and regulatory direction. The research argues that PQC adoption is increasingly being driven through procurement requirements, especially from large enterprises and public-sector organizations. Vendors without a PQC roadmap may face longer audits or disqualification during sourcing decisions. ... Beyond cryptographic threats, the researchers argue that quantum computing may eventually improve supply chain risk management by addressing complex optimization problems that overwhelm classical systems. They describe supply chain risk as a “wicked problem,” where variables shift continuously and disruptions propagate in unpredictable ways. ... Quantum readiness spans both cybersecurity and supply chain management. For cybersecurity professionals, the near-term work focuses on long-term encryption durability across vendor ecosystems, along with cryptographic migration planning and third-party dependencies.


CEOs aren't seeing any AI productivity gains, yet some tech industry leaders are still convinced AI will destroy white collar work within two years

Most companies are yet to record any AI productivity gains despite widespread adoption of the technology. That's according to a massive survey by the US National Bureau of Economic Research (NBER), which asked 6,000 executives from a range of firms across the US, UK, Germany, and Australia how they use AI. The study found 70% of companies actively use AI, but the picture is different among execs themselves. Among top executives – including CFOs and CEOs – a quarter don't use the technology at all, while two-thirds say they use it for 1.5 hours a week at most. ... "The most commonly cited uses are ‘text generation using large language models’ followed by ‘visual content creation’ and ‘data processing using machine learning’," the survey added. When it comes to employment savings, 90% of execs said they'd seen no impact from AI over the last three years, with 89% saying they saw no productivity boost, either. The report noted that previous studies have found large productivity gains in specific settings – in particular customer support and writing tasks. ... Despite the lack of impact to date, business leaders still predict AI will start to boost productivity and reduce the number of employees needed in the coming years. Respondents predict a 1.4% productivity boost and 0.8% increase in output thanks to the technology over the next three years, for example. Yet the NBER survey also reveals a "sizable gap in expectations", with senior execs saying AI would cut employment by 0.7% over the next three years — which the report said would mean 1.75 million fewer jobs. 


Observability Without Cost Telemetry Is Broken Engineering

Cost isn't an operational afterthought. It's a signal as essential as CPU saturation or memory pressure, yet we've architected it out of the feedback loop engineers actually use. ... Engineers started evaluating architectural choices through a cost lens without needing MBA training. “Should we cache this aggressively?” became answerable with data: cache infrastructure costs $X/month, API calls saved cost $Y/month, net impact is measurable, not theoretical.  ... The anti-pattern I see most often is siloed visibility. Finance gets billing dashboards. SREs get operational dashboards. Developers get APM traces. Nobody sees the intersection where cost and performance influence each other. You debug a performance issue — say, slow database queries. The fix is to add an index. Query time drops from 800 ms to 40 ms. Victory. Except the database is now using 30% more storage for that index, and your storage tier bills by the gigabyte-month. If you're on a flat-rate hosting plan, maybe that cost is absorbed. If you're on Aurora or Cosmos DB with per-IOPS pricing, you've just traded latency for dollars. Without cost telemetry, you won't notice until the bill arrives. ... Alerting without cost dimensions misses failure modes. Your error rate is fine. Latency is stable. But egress costs just doubled because a misconfigured service is downloading the same 200 GB dataset on every request instead of caching it.
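A hedged sketch of what this looks like in practice: the caching question reduced to arithmetic, and cost added as an alert dimension alongside errors and latency. All prices, traffic numbers, and thresholds below are hypothetical placeholders.

```python
# Fold cost into the same signals engineers already watch.

def caching_net_impact(cache_cost_per_month: float,
                       calls_saved_per_month: float,
                       cost_per_call: float) -> float:
    """Positive result means the cache pays for itself."""
    return calls_saved_per_month * cost_per_call - cache_cost_per_month

# "Should we cache this aggressively?" answered with data, not theory:
print(caching_net_impact(cache_cost_per_month=450.0,
                         calls_saved_per_month=2_000_000,
                         cost_per_call=0.0004))  # -> 350.0 saved per month

# Cost as an alert dimension: error rate and latency can be fine while
# egress doubles, as in the misconfigured-download example above.
def egress_alert(egress_gb_today: float, egress_gb_baseline: float) -> bool:
    return egress_gb_today > 2 * egress_gb_baseline

assert egress_alert(400.0, 180.0)  # fires even though "health" metrics look fine
```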


A New Way To Read the “Unreadable” Qubit Could Transform Quantum Technology

“Our work is pioneering because we demonstrate that we can access the information stored in Majorana qubits using a new technique called quantum capacitance,” continues the scientist, who explains that this technique “acts as a global probe sensitive to the overall state of the system.” ... To better understand this achievement, Aguado explains that topological qubits are “like safe boxes for quantum information,” only that, instead of storing data in a specific location, “they distribute it non-locally across a pair of special states, known as Majorana zero modes.” That unusual structure is what makes them attractive for quantum computing. “They are inherently robust against local noise that produces decoherence, since to corrupt the information, a failure would have to affect the system globally.” In other words, small disturbances are unlikely to disrupt the stored information. Yet this strength has also created a major experimental challenge. As Aguado notes, “this same virtue had become their experimental Achilles’ heel: how do you ‘read’ or ‘detect’ a property that doesn’t reside at any specific point?” ... The project brings together an advanced experimental platform developed primarily at Delft University of Technology and theoretical work carried out by ICMM-CSIC. According to the authors, this theoretical input was “crucial for understanding this highly sophisticated experiment,” highlighting the importance of close collaboration between theory and experiment in pushing quantum technology forward.


When Excellent Technology Architecture Fails to Deliver Business Results

Industry research consistently shows that most large-scale transformations fail to achieve their expected business outcomes, even when the underlying technology decisions are considered sound. This suggests that the issue is not technical quality. It is structural. ... The real divergence begins later, in day-to-day decision-making. Under delivery pressure, teams make choices driven by deadlines, budget constraints, and individual accountability. Temporary workarounds are accepted. Deviations are justified as exceptions. Risks are taken implicitly rather than explicitly assessed. Architecture is often aware of these decisions, but it is not structurally embedded in the moment where choices are made. As a result, architecture remains correct, but unused.  ... When architecture cannot explain the economic and operational consequences of a decision, it loses relevance. Statements such as “this violates architectural principles” carry little weight if they are not translated into impact on cost of change, delivery speed, or operational risk. ... What is critical is that these compromises are rarely tracked, assessed cumulatively, or reintroduced into management discussions. Architecture may be aware of them, but without a mechanism to record and govern them, their impact remains invisible until flexibility is lost and change becomes expensive. Architecture debt, in this sense, is not a technical failure. It is a governance outcome. When decision trade-offs remain unmanaged, architecture is blamed for consequences it was never empowered to influence.

Daily Tech Digest - July 05, 2025


Quote for the day:

“Wisdom equals knowledge plus courage. You have to not only know what to do and when to do it, but you have to also be brave enough to follow through.” -- Jarod Kintz


The Hidden Data Cost: Why Developer Soft Skills Matter More Than You Think

The logic is simple but under-discussed: developers who struggle to communicate with product owners, translate goals into architecture, or anticipate system-wide tradeoffs are more likely to build the wrong thing, need more rework, or get stuck in cycles of iteration that waste time and resources. These are not theoretical risks; they’re quantifiable cost drivers. According to Lumenalta’s findings, organizations that invest in well-rounded senior developers, including soft skill development, see fewer errors, faster time to delivery, and stronger alignment between technical execution and business value. ... The irony? Most organizations already have technically proficient talent in-house. What they lack is the environment to develop the skills that drive high-impact outcomes. Senior developers who think like “chess masters”—a term Lumenalta uses for those who anticipate several moves ahead—can drastically reduce a project’s total cost of ownership (TCO) by mentoring junior talent, catching architecture risks early, and building systems that adapt rather than break under pressure. ... As AI reshapes every layer of tech, developers who can bridge business goals and algorithmic capabilities will become increasingly valuable. It’s not just about knowing how to fine-tune a model; it’s about knowing when not to.


Why AV is an overlooked cybersecurity risk

As cyber attackers become more sophisticated, they’re shifting their attention to overlooked entry points like AV (audiovisual) infrastructure. A good example is YouTuber Jim Browning’s infiltration of a scam call center, where he used unsecured CCTV systems to monitor and expose criminals in real time. This highlights the potential for AV vulnerabilities to be exploited for intelligence gathering. To counter these risks, organizations must adopt a more proactive approach. Simulated social engineering and phishing attacks can help assess user awareness and expose vulnerabilities in behavior. These simulations should be backed by ongoing training that equips staff to recognize manipulation tactics and understand the value of security hygiene. ... To mitigate the risks posed by vulnerable AV systems, organizations should take a proactive and layered approach to security. This includes regularly updating device firmware and underlying software packages, which are often left outdated even when new versions are available. Strong password policies should be enforced, particularly on devices running web servers, with security practices aligned to standards like the OWASP Top 10. Physical access to AV infrastructure must also be tightly controlled to prevent unauthorized LAN connections.


EU Presses for Quantum-Safe Encryption by 2030 as Risks Grow

The push comes amid growing concern about the long-term viability of conventional encryption techniques. Current security protocols rely on complex mathematical problems — such as factoring large numbers — that would take today’s classical computers thousands of years to solve. But quantum computers could potentially crack these systems in a fraction of the time, opening the door to what cybersecurity experts refer to as “store now, decrypt later” attacks. In these attacks, hackers collect encrypted data today with the intention of breaking the encryption once quantum technology matures. Germany’s Federal Office for Information Security (BSI) estimates that conventional encryption could remain secure for another 10 to 20 years in the absence of sudden breakthroughs, The Munich Eye reports. Europol has echoed that forecast, suggesting a 15-year window before current systems might be compromised. While the timeline is uncertain, European authorities agree that proactive planning is essential. PQC is designed to resist attacks from both classical and quantum computers by using algorithms based on different kinds of hard mathematical problems. These newer algorithms are more complex and require different computational strategies than those used in today’s standards like RSA and ECC. 


MongoDB Doubles Down on India's Database Boom

Chawla says MongoDB is helping Indian enterprises move beyond legacy systems through two distinct approaches. "The first one is when customers decide to build a completely new modern application, gradually sunsetting the old legacy application," he explains. "We work closely with them to build these modern systems." ... Despite this fast-paced growth, Chawla points out several lingering myths in India. "A lot of customers still haven't realised that if you want to build a modern application, especially one that's AI-driven, you can't build it on a relational structure," he explains. "Most of the data today is unstructured and messy. So you need a database that can scale, can handle different types of data, and support modern workloads." ... Even those trying to move away from traditional databases often fall into the trap of viewing PostgreSQL as a modern alternative. "PostgreSQL is still relational in nature. It has the same row-and-column limitations and scalability issues." He also adds that if companies want to build a future-proof application, especially one that infuses AI capabilities, they need something that can handle all data types and offers native support for features like full-text search, hybrid search, and vector search. Other NoSQL players such as Redis and Apache Cassandra also have significant traction in India.
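As an illustration of the native vector search Chawla alludes to, here is a hedged PyMongo sketch using MongoDB Atlas's $vectorSearch aggregation stage. The connection string, database, collection, index name, and embedding values are all hypothetical, and the stage requires an Atlas vector search index to already exist.

```python
# Hypothetical sketch: vector search over a "products" collection, assuming an
# Atlas vector index named "embedding_index" on the "embedding" field.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")
products = client["shop"]["products"]

query_vector = [0.12, -0.07, 0.33]  # normally produced by an embedding model

results = products.aggregate([
    {"$vectorSearch": {
        "index": "embedding_index",
        "path": "embedding",
        "queryVector": query_vector,
        "numCandidates": 100,   # candidates scanned before ranking
        "limit": 5,             # top matches returned
    }},
    {"$project": {"name": 1, "score": {"$meta": "vectorSearchScore"}}},
])
for doc in results:
    print(doc)
```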


AI only works if the infrastructure is right

The successful implementation of artificial intelligence is therefore closely linked to the underlying infrastructure. But how you define that AI infrastructure is open to debate. An AI infrastructure always consists of different components, which is clearly reflected in the diverse backgrounds of the participating parties. As a customer, how can you best assess such an AI infrastructure? ... For companies looking to get started with AI infrastructure, a phased approach is crucial. Start small with a pilot, clearly define what you want to achieve, and expand step by step. The infrastructure must grow with the ambitions, not the other way around. A practical approach starts from the objectives; from there, the appropriate software, middleware, and hardware can be selected. For virtually every use case, you can choose from the necessary and desired components. ... At the same time, the AI landscape requires a high degree of flexibility. Technological developments are rapid, models change, and business requirements can shift from quarter to quarter. It is therefore essential to establish an infrastructure that is not only scalable but also adaptable to new insights or shifting objectives. Consider the possibility of dynamically scaling computing capacity up or down, compressing models where necessary, and deploying tooling that adapts to the requirements of the use case.


Software abstraction: The missing link in commercially viable quantum computing

Quantum Infrastructure Software delivers this essential abstraction, turning bare-metal QPUs into useful devices, much the way data center providers integrate virtualization software for their conventional systems. Current offerings cover all of the functions typically associated with the classical BIOS up through virtual machine Hypervisors, extending to developer tools at the application level. Software-driven abstraction of quantum complexity away from the end users lets anyone, irrespective of their quantum expertise, leverage quantum computing for the problems that matter most to them. ... With a finely tuned quantum computer accessible, a user must still execute many tasks to extract useful answers from the QPU, in analogy with the need for careful memory management required to gain practical acceleration with GPUs. Most importantly, in executing a real workload, they must convert high-level “assembly-language” logical definitions of quantum applications into hardware-specific “machine-language” instructions that account for the details of the QPU in use, and deploy countermeasures where errors might leak in. These are typically tasks that can only be handled by (expensive!) specialists in quantum-device operation.
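Qiskit's transpiler is one concrete example of this lowering step (the article does not name a specific toolchain): a minimal sketch that takes a high-level logical circuit and rewrites it into a restricted, hardware-style gate set.

```python
# The "assembly-language to machine-language" step, illustrated with Qiskit.
from qiskit import QuantumCircuit, transpile

# High-level logical definition: a 2-qubit Bell-state circuit.
bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)

# Lower it to a restricted gate set, as a real QPU backend would require.
lowered = transpile(bell, basis_gates=["rz", "sx", "cx"], optimization_level=1)
print(lowered)  # the H gate has been decomposed into rz/sx rotations
```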


Guest Post: Why AI Regulation Won’t Work for Quantum

Artificial intelligence regulation has been in the regulatory spotlight for the past seven to ten years and there is no shortage of governments and global institutions, as well as corporations and think tanks, putting forth regulatory frameworks in response to this widely buzzy tech. AI makes decisions in a “black box,” creating a need for “explainability” in order to fully understand how determinations by these systems affect the public. With the democratization of AI systems, there is the potential for bad actors to create harm in a decentralized ecosystem. ... Because quantum systems do not learn on their own, evolve over time, or make decisions based on training data, they do not pose the same kind of existential or social threats that AI does. Whereas the implications of quantum breakthroughs will no doubt be profound, especially in cryptography, defense, drug development, and material science, the core risks are tied to who controls the technology and for what purpose. Regulating who controls technology and ensuring bad actors are disincentivized from using technology in harmful ways is the stuff of traditional regulation across many sectors, so regulating quantum should prove somewhat less challenging than current AI regulatory debates would suggest.


Validation is an Increasingly Critical Element of Cloud Security

Security engineers simply don’t have the time or resources to familiarize themselves with the vast number of cloud services available today. In the past, security engineers primarily needed to understand Windows and Linux internals, Active Directory (AD) domain basics, networks and some databases and storage solutions. Today, they need to be familiar with hundreds of cloud services, from virtual machines (VMs) to serverless functions and containers at different levels of abstraction. ... It’s also important to note that cloud environments are particularly susceptible to misconfigurations. Security teams often primarily focus on assessing the performance of their preventative security controls, searching for weaknesses in their ability to detect attack activity. But this overlooks the danger posed by misconfigurations, which are not caused by bad code, software bugs, or malicious activity. That means they don’t fall within the definition of “vulnerabilities” that organizations typically test for—but they still pose a significant danger.  ... Securing the cloud isn’t just about having the right solutions in place — it’s about determining whether they are functioning correctly. But it’s also about making sure attackers don’t have other, less obvious ways into your network.
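As a small illustration of validating configurations rather than scanning for vulnerabilities, here is a hedged boto3 sketch that flags S3 buckets without a fully enabled public-access block. It assumes AWS credentials are available in the environment, and the check is deliberately simplistic; real posture-validation tools cover hundreds of such checks across services.

```python
# Flag S3 buckets whose public-access block is missing or incomplete: a
# misconfiguration check, not a vulnerability scan.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)
        blocked = all(cfg["PublicAccessBlockConfiguration"].values())
    except ClientError:
        # No configuration at all is itself a finding worth raising.
        blocked = False
    if not blocked:
        print(f"possible misconfiguration: {name} is not fully blocking public access")
```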


Build and Deploy Scalable Technical Architecture a Bit Easier

A critical challenge when transforming proof-of-concept systems into production-ready architecture is balancing rapid development with future scalability. At one organization, I inherited a monolithic Python application that was initially built as a lead distribution system. The prototype performed adequately in controlled environments but struggled when processing real-world address data, which, by their nature, contain inconsistencies and edge cases. ... Database performance often becomes the primary bottleneck in scaling systems. Domain-Driven Design (DDD) has proven particularly valuable for creating loosely coupled microservices, with its strategic phase ensuring that the design architecture properly encapsulates business capabilities, and the tactical phase allowing the creation of domain models using effective design patterns. ... For systems with data retention policies, table partitioning proved particularly effective, turning one table into several while maintaining the appearance of a single table to the application. This allowed us to implement retention simply by dropping entire partition tables rather than performing targeted deletions, which prevented database bloat. These optimizations reduced average query times from seconds to milliseconds, enabling support for much higher user loads on the same infrastructure.
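A hedged sketch of the retention pattern described above, assuming PostgreSQL-style declarative range partitioning (the article does not name the database). The parent table is assumed to have been created with PARTITION BY RANGE on a timestamp column; table and column names are hypothetical.

```python
# Retention becomes "drop a partition" instead of bulk row deletions.
from datetime import date

def month_partition_ddl(parent: str, first_day: date, next_month: date) -> str:
    # Assumes: CREATE TABLE leads (...) PARTITION BY RANGE (created_at);
    name = f"{parent}_{first_day:%Y_%m}"
    return (f"CREATE TABLE {name} PARTITION OF {parent} "
            f"FOR VALUES FROM ('{first_day}') TO ('{next_month}');")

def retention_ddl(parent: str, expired_month: date) -> str:
    # Dropping a whole partition avoids the bloat of row-by-row deletion.
    return f"DROP TABLE {parent}_{expired_month:%Y_%m};"

print(month_partition_ddl("leads", date(2026, 2, 1), date(2026, 3, 1)))
print(retention_ddl("leads", date(2025, 2, 1)))
```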


What AI Policy Can Learn From Cyber: Design for Threats, Not in Spite of Them

The narrative that constraints kill innovation is both lazy and false. In cybersecurity, we’ve seen the opposite. Federal mandates like the Federal Information Security Modernization Act (FISMA), which forced agencies to map their systems, rate data risks, and monitor security continuously, and state-level laws like California’s data breach notification statute created the pressure and incentives that moved security from afterthought to design priority.  ... The irony is that the people who build AI, like their cybersecurity peers, are more than capable of innovating within meaningful boundaries. We’ve both worked alongside engineers and product leaders in government and industry who rise to meet constraints as creative challenges. They want clear rules, not endless ambiguity. They want the chance to build secure, equitable, high-performing systems — not just fast ones. The real risk isn’t that smart policy will stifle the next breakthrough. The real risk is that our failure to govern in real time will lock in systems that are flawed by design and unfit for purpose. Cybersecurity found its footing by designing for uncertainty and codifying best practices into adaptable standards. AI can do the same if we stop pretending that the absence of rules is a virtue.

Daily Tech Digest - September 29, 2021

Approaching Anomaly Detection in Transactional Data

Usually, people mean financial transactions when they talk about transactional data. However, according to Wikipedia, “Transactional Data is data describing an event (the change as a result of a transaction) and is usually described with verbs. Transaction data always has a time dimension, a numerical value and refers to one or more objects”. In this article, we will use data on requests made to a server (internet traffic data) as an example, but the considered approaches can be applied to most of the datasets falling under the aforementioned definition of transactional data. Anomaly detection, in simple words, is finding data points that shouldn’t normally occur in the system that generated the data. Anomaly detection in transactional data has many applications; here are a few examples: fraud detection in financial transactions; fault detection in manufacturing; attack or malfunction detection in a computer network (the case covered in this article); predictive maintenance recommendations; and health condition monitoring and alerting.
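As a minimal sketch of the traffic case, here is one simple detector: flag any time bucket whose request count deviates sharply from a rolling baseline. The numbers are synthetic and the three-sigma threshold is an arbitrary illustrative choice; the article's actual methods may differ.

```python
# Rolling z-score anomaly detection on synthetic requests-per-minute counts.
import numpy as np

requests_per_minute = np.array(
    [52, 48, 50, 55, 49, 51, 47, 53, 50, 210, 52, 49])  # 210 = injected spike

window = 5
for i in range(window, len(requests_per_minute)):
    history = requests_per_minute[i - window:i]
    mean, std = history.mean(), history.std() + 1e-9  # avoid divide-by-zero
    z = (requests_per_minute[i] - mean) / std
    if abs(z) > 3:
        print(f"minute {i}: count={requests_per_minute[i]} (z={z:.1f}) looks anomalous")
```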


Apache Kafka: Core Concepts and Use Cases

The first thing anyone working with streaming applications needs to understand is the event: a small piece of data. For instance, when a user registers within the system, an event is created. You can also think of an event as a message carrying data, which can be processed and saved somewhere if required. This event is the message in which data such as the user’s name, email, password, and so forth can be added. This highlights that Kafka is a platform that works well for streaming events. Events are continually written by producers. They are called producers because they write events, or data, to Kafka. There are many kinds of producers. Examples include web servers, components of applications, whole applications, IoT devices, monitoring agents, and so on. A new user registration event can be produced by the component of the site that is responsible for user registrations.
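A minimal sketch of that producer flow using the kafka-python client (one of several Kafka clients; the article does not prescribe one). The broker address, topic name, and registration payload are hypothetical.

```python
# The registration component of the site acting as a Kafka producer.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# A user-registration event: a small, self-contained piece of data.
event = {"type": "user_registered", "name": "Ada", "email": "ada@example.com"}

producer.send("user-events", value=event)
producer.flush()  # block until the event is actually written to Kafka
```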


How to Build a Regression Testing Strategy for Agile Teams

Regression testing is the process of verifying that code changes, updates, or improvements to the application have not affected the software’s existing functionality. Regression testing in software engineering ensures the overall stability and functionality of the software’s existing features. It ensures that the overall system stays sustainable under continuous improvement as new features are added to the code. Regression testing helps target and reduce the risk of code dependencies, defects, and malfunctions, so that previously developed and tested code stays operational after a modification. Generally, the software undergoes many tests before new changes are integrated into the main development branch of the code. ... Automated regression testing is mainly used in medium and large complex projects once the project is stable. With a thorough plan, automated regression testing reduces the time a tester spends on tedious, repeatable tasks, freeing them for work that requires manual attention, like exploratory testing and UX testing.
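As a small illustration, a regression test pins previously verified behavior so a later change cannot silently break it. The function and the earlier bug below are hypothetical; such tests would typically run with pytest in CI before changes merge into the main branch.

```python
# Hypothetical function under test, plus regression tests guarding its behavior.
def normalize_email(raw: str) -> str:
    return raw.strip().lower()

def test_normalize_email_still_strips_whitespace():
    # Guards an earlier fix: leading/trailing spaces once broke login lookups.
    assert normalize_email("  Ada@Example.COM ") == "ada@example.com"

def test_normalize_email_is_idempotent():
    # Normalizing twice must give the same result as normalizing once.
    once = normalize_email("Ada@Example.COM")
    assert normalize_email(once) == once
```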


Sam Newman on Information Hiding, Ubiquitous Language, UI Decomposition and Building Microservices

The ubiquitous language in many ways is the keystone of domain-driven design, and it's amazing how many people skip it, even though it's foundational. I think a lot of the reason that people skip the ubiquitous language is because understanding what terms and terminology are used by the business side of your organization, by the users of your software, involves having to talk to people. It still stuns me how many enterprise architects have come up with a domain model by themselves without ever having spoken to anybody outside of IT. So, fundamentally, the ubiquitous language starts with having conversations. This is why I like event storming as a domain-driven design technique, because it places primacy on having that kind of collective brainstorming activity where you get your non-developer, non-technical stakeholders in the room and listen to what they're talking about, and you're picking up their terms, their terminology, and you're trying to put those terms into your code.


Technical architecture: What IT does for a living

Technical architecture is the sum and substance of what IT deploys to support the enterprise. As such, its management is a key IT practice. We talked about how to go about it in a previous article in this series. Which leads to the question, What constitutes good technical architecture? Or more foundationally, What constitutes technical architecture, whether good, bad, or indifferent? In case you’re a purist, we’re talking about technical architecture, not enterprise architecture. The latter includes the business architecture as well as the technical architecture. Not that it’s possible to evaluate the technical architecture without understanding how well it supports the business architecture. It’s just that managing the health of the business architecture is Someone Else’s Problem. IT always has a technical architecture. In some organizations it’s deliberate, the result of processes and practices that matter most to CIOs. But far too often, technical architecture is accidental — a pile of stuff that’s accumulated over time without any overall plan.


Preparing for the 'golden age' of artificial intelligence and machine learning

"Implementing an AI solution is not easy, and there are many examples of where AI has gone wrong in production," says Tripti Sethi, senior director at Avanade. "The companies we have seen benefit from AI the most understand that AI is not a plug-and-play tool, but rather a capability that needs to be fostered and matured. These companies are asking 'what business value can I drive with data?' rather than 'what can my data do?'" Skills availability is one of the leading issues that enterprises face in building and maintaining AI-driven systems. Close to two-thirds of surveyed enterprises, 62%, indicated that they couldn't find talent on par with the skills requirements needed in efforts to move to AI. More than half, 54%, say that it's been difficult to deploy AI within their existing organizational cultures, and 46% point to difficulties in finding funding for the programs they want to implement. ... In recent months and years, AI bias has been in the headlines, suggesting that AI algorithms reinforce racism and sexism. 


Skilling in the IT sector for a post pandemic era – An Experts View

“When there’s a necessity, innovations follow,” said Mahipal Nair (People Development & Operations Leader, NielsenIQ). The company moved from people-interaction-dependent learning to digital methods to navigate skilling priorities. As consumer expectations change, leadership and social skills have become a priority for workplace performance. “The way to solve this is not just to transform current talent, but create relevant talent,” said Nilanjan Kar (CRO, Harappa). Echoing the sentiment, Kirti Seth (CEO, SSC NASSCOM) added that “learning should be about principles, and it should enable employees to make the basics their own.” This will help create a learning organization that can contextualize change across the industry to stay relevant and map the desired learning outcomes. While companies upskill their workforce on these priorities, the real question is what skills will be required? Anupal Banerjee (CHRO, Tata Technologies) noted that “with the change in skills, there are multiple levels to focus on. While one focus area is on technical skills, the second is on behavioral skills. ...”.


Re-evaluating Kafka: issues and alternatives for real-time

By nature, your Kafka deployment is pretty much guaranteed to be a large-scale project. Imagine operating an equally large-scale MySQL database that is used by multiple critical applications. You’d almost certainly need to hire a database administrator (or a whole team of them) to manage it. Kafka is no different. It’s a big, complex system that tends to be shared among multiple client applications. Of course it’s not easy to operate! Kafka administrators must answer hard design questions from the get-go. This includes defining how messages are stored in partitioned topics, retention, and team or application quotas. We won’t get into detail here, but you can think of this task as designing a database schema, but with the added dimension of time, which multiplies the complexity. You need to consider what each message represents, how to ensure it will be consumed in the proper order, where and how to enact stateful transformations, and much more — all with extreme precision.
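To make those design questions concrete, here is a hedged sketch using kafka-python's admin client: partitions, replication, and retention are all fixed up front at topic creation. The broker address and sizing numbers are hypothetical placeholders, not recommendations.

```python
# Up-front topic design: the "schema plus time dimension" decisions the
# article describes, fixed at creation time.
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")

orders_topic = NewTopic(
    name="orders",
    num_partitions=12,          # fixes ordering and parallelism granularity
    replication_factor=3,
    topic_configs={
        "retention.ms": str(7 * 24 * 60 * 60 * 1000),  # keep messages 7 days
        "cleanup.policy": "delete",                     # expire, don't compact
    },
)
admin.create_topics([orders_topic])
```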


Climbing to new heights with the aid of real-time data analytics

Enter hybrid analytics. The world of data management has been reimagined, with analytics at the speed of transactions made possible through simpler processes and a single hybrid system that breaks down the walls between transactions and analytics. Hybrid analytics makes it possible to avoid moving information from databases to data warehouses and allows simple real-time data processing. This innovation enables enhanced customer experiences and a more data-driven approach to decision making, thanks to the deeper business insights delivered through a hybrid system. Real-time processing also means a faster time to insight. Businesses can better understand their customers without long, complex processes, while the feedback loop is shortened for increased efficiency. It’s this approach that delivers a data-driven competitive advantage. Both developers and database administrators can access and manage data far more easily, only having to deal with one connected system and no database sprawl.


Why DevSecOps fails: 4 signs of trouble

When Haff says that some organizations make the mistake of not giving DevSecOps its due, he adds that the people and culture component is most often the glaring omission. Of course, it’s not actually “glaring” until you realize that your DevSecOps initiative has fallen flat and you start to wonder why. One way you might end up traveling this suboptimal path: You focus too much on technology as the end-all solution rather than as one layer in a multi-faceted strategy. “They probably have adopted at least some of the scanning and other tooling they need to mitigate various types of threats. They’re likely implementing workflows that incorporate automation and iterative development,” Haff says. “What they’re less likely to be paying attention to – and may be treating as an afterthought – is people and culture.” Just as DevOps was about more than a toolchain, DevSecOps is about more than throwing security technologies at various risks. “An organization can get all the tools and mechanics right but if, for example, developers and operations teams don’t collaborate with your security experts, you’re not really doing DevSecOps,” Haff says.



Quote for the day:

"Authentic leaders are often accused of being 'controlling' by those who idly sit by and do nothing" --John Paul Warren