
Daily Tech Digest - September 22, 2023

HR Leaders’ strategies for elevating employee engagement in global organisations

In the age of AI, HR technologies have emerged as powerful tools for enhancing employee engagement by streamlining HR processes, improving communication, and personalising the employee experience. Sreedhara added, “By embracing HR Tech, we can enhance the employee experience by reducing administrative burdens, improving access to information, and enabling employees to focus on more meaningful aspects of their work. Moreover, these technologies can contribute to greater employee engagement. Enhancing employee experience via HR tech and tools can improve efficiency and empower employees to take more control of their work-related tasks. We have also enabled some self-service technologies like: Employee portal that serves all HR-related tasks, and access to policies and processes across the employee life cycle - Onboarding, performance management, benefits enrolment, and expense management; Employee feedback and surveys; Databank for predictive analysis (early warning systems) and manage employee engagement.”


Bolstering enterprise LLMs with machine learning operations foundations

Risk mitigation is paramount throughout the entire lifecycle of the model. Observability, logging, and tracing are core components of MLOps processes, which help monitor models for accuracy, performance, data quality, and drift after their release. This is critical for LLMs too, but there are additional infrastructure layers to consider. LLMs can “hallucinate,” where they occasionally output false knowledge. Organizations need proper guardrails—controls that enforce a specific format or policy—to ensure LLMs in production return acceptable responses. Traditional ML models rely on quantitative, statistical approaches to apply root cause analyses to model inaccuracy and drift in production. With LLMs, this is more subjective: it may involve running a qualitative scoring of the LLM’s outputs, then running it against an API with pre-set guardrails to ensure an acceptable answer. Governance of enterprise LLMs will be both an art and a science, and many organizations are still working out how to codify it into actionable risk thresholds.
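As a rough sketch of the guardrail idea described above (the JSON policy, field names, and fallback answer are assumptions for illustration, not part of any particular MLOps product), a post-processing check might validate the format of an LLM response and substitute a safe answer when the output falls outside the policy:

```python
import json

REQUIRED_FIELDS = {"answer", "confidence"}  # hypothetical policy: response must be JSON with these keys
FALLBACK = {"answer": "I can't answer that reliably.", "confidence": 0.0}

def apply_guardrail(raw_response: str) -> dict:
    """Return the parsed response if it satisfies the format policy, else a safe fallback."""
    try:
        parsed = json.loads(raw_response)
    except json.JSONDecodeError:
        return FALLBACK  # free-form or hallucinated text: reject
    if not isinstance(parsed, dict) or not REQUIRED_FIELDS.issubset(parsed):
        return FALLBACK  # missing required fields: reject
    if not (0.0 <= float(parsed["confidence"]) <= 1.0):
        return FALLBACK  # out-of-policy values: reject
    return parsed

# A malformed model output is replaced by the fallback; a compliant one passes through.
print(apply_guardrail("not json at all"))
print(apply_guardrail('{"answer": "42", "confidence": 0.9}'))
```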


Reimagining Application Development with AI: A New Paradigm

AI-assisted pair programming is a collaborative coding approach where an AI system — like GitHub Copilot or TestPilot — assists developers during coding. It’s an increasingly common approach that significantly impacts developer productivity. In fact, GitHub Copilot is now behind an average of 46 percent of developers’ code and users are seeing 55 percent faster task completion on average. For new software developers, or those interested in learning new skills, AI-assisted pair programming serves as training wheels for coding. With the benefits of code snippet suggestions, developers can avoid struggling with beginner pitfalls like language syntax. Tools like ChatGPT can act as a personal, on-demand tutor — answering questions, generating code samples, and explaining complex code syntax and logic. These tools dramatically speed up the learning process and help developers gain confidence in their coding abilities. Building applications with AI tools hastens development and provides more robust code.


Don't Let AI Frenzy Lead to Overlooking Security Risks

"Everybody is talking about prompt injection or backporting models because it is so cool and hot. But most people are still struggling with the basics when it comes to security, and these basics continue to be wrong," said John Stone - whose title at Google Cloud is "chaos coordinator" - while speaking at Information Security Media Group's London Cybersecurity Summit. Successful AI implementation requires a secure foundation, meaning that firms should focus on remediating vulnerabilities in the supply chain, source code, and larger IT infrastructure, Stone said. "There are always new things to think about. But the older security risks are still going to happen. You still have infrastructure. You still have your software supply chain and source code to think about." Andy Chakraborty, head of technology platforms at Santander U.K., told the audience that highly regulated sectors such as banking and finance must especially exercise caution when deploying AI solutions that are trained on public data sets.


The second coming of Microsoft's do-it-all laptop is more functional than ever

Microsoft's Surface Laptop Studio 2 is really unlike any other laptop on the market right now. The screen is held up by a tiltable hinge that lets it switch from what I'll call "regular laptop mode" to stage mode (the display is propped up at an angle facing you) to studio mode (the display is laid flat, screen-side up, like a tablet). The closest thing I can think of is, well, the previous Laptop Studio model, which sports the same shape-shifting form factor. But after today, if you're the customer for Microsoft's screen-tilting Surface device, then your eyes will be all over the latest model, not the old one. And for good reason: unlike its predecessor, the new Surface Laptop Studio 2 features an improved 13th Gen Intel Core H-class processor, NVIDIA's latest RTX 4050/4060 GPUs, and an Intel NPU on Windows for video calling optimizations (which never hurts to have). Every Microsoft expert on the demo floor made it clear to me that gaming and content creation workflows are still the focus of the Studio laptop, so the changes under the hood make sense.


Why more security doesn’t mean more effective compliance

Worse, the more tools there are to manage, the harder it might be to prove compliance with an evolving patchwork of global cybersecurity rules and regulations. That’s especially true of legislation like DORA, which focuses less on prescriptive technology controls and more on providing evidence of why policies were put in place, how they’re evolving, and how organizations can prove they’re delivering the intended outcomes. In fact, it explicitly states that security and IT tools must be continuously monitored and controlled to minimize risk. This is a challenge when organizations rely on manual evidence gathering. Panaseer research reveals that while 82% are confident they’re able to meet compliance deadlines, 49% mostly or solely rely on manual, point-in-time audits. This simply isn’t sustainable for IT teams, given the number of security controls they must manage, the volume of data they generate, and continuous, risk-based compliance requirements. They need a more automated way to continuously measure and evidence KPIs and metrics across all security controls.


EU Chips Act comes into force to ensure supply chain resilience

“With the entry into force today of the European Chips Act, Europe takes a decisive step forward in determining its own destiny. Investment is already happening, coupled with considerable public funding and a robust regulatory framework,” said Thierry Breton, commissioner for Internal Market, in comments posted alongside the announcement. “We are becoming an industrial powerhouse in the markets of the future — capable of supplying ourselves and the world with both mature and advanced semiconductors. Semiconductors that are essential building blocks of the technologies that will shape our future, our industry, and our defense base,” he said. The European Union’s Chips Act is not the only government-backed plan aimed at shoring up domestic chip manufacturing in the wake of the supply chain crisis that has plagued the semiconductor industry in recent years. In the past year, the US, UK, Chinese, Taiwanese, South Korean, and Japanese governments have all announced similar plans.


Microsoft Copilot Brings AI to Windows 11, Works Across Multiple Apps and Your Phone

With Copilot, it's possible to ask the AI to write a summary of a book in the middle of a Word document, or to select an image and have the AI remove the background. In one example, Microsoft showed a long email and demonstrated that when you highlight the text, Copilot appears so you can ask it questions related to the email. And that information can be cross-referenced to information found online, such as asking Copilot for lunch spots nearby based on the email's content. Copilot will be available on the Windows 11 desktop taskbar, making it instantly available at one click. Microsoft says that whether you're using Word, PowerPoint or Edge, you can call on Copilot to assist you with various tasks. It can also be called on via voice. Copilot can connect to your phone, so, for example, you can ask it when your next flight is and it'll look through your text messages and find the necessary information. Edge, Microsoft's web browser, will also have Copilot integrations. 


What Are the Biggest Lessons from the MGM Ransomware Attack?

Ransomware groups increasingly focus on branding and reputation, according to Ferhat Dikbiyik, head of research at third-party risk management software company Black Kite. “When ransomware first made its appearance, the attacks were relatively unsophisticated. Over the years, we have observed a marked elevation in their capabilities and tactics,” he tells InformationWeek in a phone interview. ... The group also called out: “The rumors about teenagers from the US and UK breaking into this organization are still just that -- rumors. We are waiting for these ostensibly respected cybersecurity firms who continue to make this claim to start providing solid evidence to support it.” Dikbiyik also notes that ransomware groups’ more nuanced selection of targets is an indication of increased professionalism. “These groups are doing their homework. They have resources. They acquire intelligence tools…they try to learn their targets,” he says. While ransomware is lucrative, money isn’t the only goal. Selecting high-profile targets, such as MGM, helps these groups to build a reputation, according to Dikbiyik.


A Dimensional Modeling Primer with Mark Peco

“Dimensional models are made up of two elements: facts and dimensions,” he explained. “A fact quantifies a property (e.g., a process cost or efficiency score) and is a measurement that can be captured at a point in time. It’s essentially just a number. A dimension provides the context for that number (e.g., when it was measured, who was the customer, what was the product).” It’s through combining facts and dimensions that we create information that can be used to answer business questions, especially those that relate to process improvement or business performance, Peco said. Peco went on to say that one of the biggest challenges he sees with companies using dimensional models is with integrating the potentially huge number of models into one coherent picture of the business. “A company has many, many processes,” he said, “and each requires its own dimensional model, so there has to be some way of joining these models together to give a complete picture of the organization.” 
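To make the fact/dimension split concrete, here is a small standalone sketch (the table names, columns, and sample rows are invented, not taken from Peco): a numeric fact only answers a business question once it is joined to its dimensions.

```python
import pandas as pd

# Dimensions: the descriptive context (invented sample data)
dim_customer = pd.DataFrame({"customer_id": [1, 2], "customer_name": ["Acme", "Globex"]})
dim_product = pd.DataFrame({"product_id": [10, 11], "product_name": ["Widget", "Gadget"]})

# Fact table: just measurements plus foreign keys pointing at the dimensions
fact_sales = pd.DataFrame({
    "date": ["2023-09-01", "2023-09-02"],
    "customer_id": [1, 2],
    "product_id": [10, 11],
    "sales_amount": [1200.0, 450.0],  # the measurable fact captured at a point in time
})

# Combining facts with dimensions turns raw numbers into an answerable business question:
# "how much did each customer spend on each product, and when?"
report = (fact_sales
          .merge(dim_customer, on="customer_id")
          .merge(dim_product, on="product_id"))
print(report[["date", "customer_name", "product_name", "sales_amount"]])
```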



Quote for the day:

"Things work out best for those who make the best of how things work out." -- John Wooden

Daily Tech Digest - February 20, 2023

How quantum computing threatens internet security

“Basically, the problem with our current security paradigm is that it relies on encrypted information and decryption keys that are sent over a network from sender to receiver. Regardless of the way the messages are encrypted, in theory, someone can intercept and use the keys to decrypt apparently secure messages. Quantum computers simply make this process faster,” Tanaka explains. “If we dispense with this key-sharing idea and instead find a way to use unpredictable random numbers to encrypt information, the system might be immune. [Muons] are capable of generating truly unpredictable numbers.” The proposed system is based on the fact that the speed of arrival of these subatomic particles is always random. This would be the key to encrypt and decrypt the message, if there is a synchronized sender and receiver. In this way, the sending of keys would be avoided, according to the Japanese team. However, muon detection devices are large, complex and power-hungry, limitations that Tanaka believes the technology could ultimately overcome.
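The Japanese team's actual scheme is not spelled out here, so the snippet below is only a generic one-time-pad-style sketch of the underlying idea: two parties who can derive the same unpredictable byte stream (faked here with a deterministic stand-in for synchronized muon arrival times) can encrypt and decrypt without ever transmitting a key.

```python
import random

def pad_from_observations(timings, length):
    """Derive a byte stream from shared observations (a toy stand-in for synchronized muon timings)."""
    rng = random.Random(hash(tuple(timings)))  # identical observations -> identical pad on both ends
    return bytes(rng.randrange(256) for _ in range(length))

def xor_bytes(data, pad):
    """XOR is its own inverse, so the same pad both encrypts and decrypts."""
    return bytes(d ^ p for d, p in zip(data, pad))

observed = [0.113, 2.907, 5.442]                  # both sides record the same arrival times
message = b"meet at dawn"
pad = pad_from_observations(observed, len(message))
ciphertext = xor_bytes(message, pad)              # sender encrypts
assert xor_bytes(ciphertext, pad) == message      # receiver decrypts; no key was ever transmitted
```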


Considering Entrepreneurship After a Successful Corporate Career? Here Are 3 Things You Need to Know

Many of you may be concerned that a transition could alienate your audience and force you to wait before making a move. But this is a common misconception rooted in the idea that your personal brand reflects what you do professionally. At Brand of a Leader, we help our clients shift their thinking by showing them that their personal brand is who they are, not what they do. The goal of personal brand discovery is to understand your essence and package it in a way that appeals to others. Your vocation is only one of your key talking points, and when you pivot, you simply shift those points while maintaining the essence of your brand. So, when should you start building your personal brand? The answer is simple: the sooner, the better. Building a brand takes time — time to build an audience, create visibility and establish associations between your name and consistent perceptions in people's minds. Starting sooner means you'll start seeing results faster.


Establish secure routes and TLS termination with wildcard certificates

By default, the Red Hat OpenShift Container Platform uses the Ingress Operator to create an internal certificate authority (CA) and issue a wildcard certificate valid for applications under the .apps subdomain. The web console and the command-line interface (CLI) use this certificate. You can replace the default wildcard certificate with one issued by a public CA included in the CA bundle provided by the container userspace. This approach allows external clients to connect to applications running under the .apps subdomain securely. You can replace the default ingress certificate for all applications under the .apps subdomain. After replacing the certificate, all applications, including the web console and CLI, will be encrypted using the specified certificate. One clear benefit of using a wildcard certificate is that it minimizes the effort of managing and securing multiple subdomains. However, this convenience comes at the cost of sharing the same private key across all managed subdomains.
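As a rough sketch of that replacement flow (wrapping the `oc` CLI from Python; the secret name, file paths, and exact flags are placeholders and should be verified against the OpenShift documentation for your cluster version), the two documented steps are creating a TLS secret in the openshift-ingress namespace and pointing the default IngressController at it:

```python
import subprocess

def run(cmd):
    """Echo and run one oc command, failing loudly if it returns a non-zero exit code."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Store the wildcard certificate and key as a TLS secret (name and paths are placeholders).
run(["oc", "create", "secret", "tls", "custom-ingress-cert",
     "--cert=wildcard.crt", "--key=wildcard.key",
     "-n", "openshift-ingress"])

# 2. Tell the default IngressController to serve that certificate for *.apps routes.
run(["oc", "patch", "ingresscontroller.operator", "default",
     "-n", "openshift-ingress-operator", "--type=merge",
     "-p", '{"spec":{"defaultCertificate":{"name":"custom-ingress-cert"}}}'])
```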


Overcoming a cyber “gut punch”: An interview with Jamil Farshchi

Your biggest enemies in a breach are time and perfection. Everyone wants everything done in a split second. And having perfect information to construct perfect solutions and make perfect decisions is impossible. Time and perfection will ultimately crush you. By contrast, your two greatest allies are communication and optionality. Communication is being able to lay out the story of where things are, and to make sure everyone is rowing in the same direction. It’s being able to communicate the current status, and your plans, to regulators—and at the same time being able to reassure your customers and make sure they have confidence that you’re going to be able to navigate to the other side. Optionality is critical, because no one makes perfect decisions in this kind of firefight. Unless you’re comfortable making decisions that might not be right at any given point in time, you’re going to fail. [As a leader,] you need to frame up a program and the decisions you’re making in such a way that you’re comfortable rolling them back or tailoring them as you learn more, and as things progress.


7 reasons to avoid investing in cyber insurance

Two things organizations might want to consider right off the bat when contemplating an insurance policy are the cost to and benefit for the business, SecAlliance Director of Intelligence Mick Reynolds tells CSO. “When looking at cost, the recent spate of ransomware attacks globally has seen massive increases in premiums for firms wishing to include coverage of such events. Renewal quotes have, in some cases, increased from around £100,000 ($120,000) to over £1.5 million ($1.8 million). Such massive increases in premiums, for no perceived increase in coverage, are starting now to be challenged by board risk committees as to the overall value they provide, with some now deciding that accepting exposure to major cyber events such as ransomware is preferable to the cost of the associated policy.” As for benefits to the business, insurance is primarily taken out to cover losses incurred during a major cyber event, and 99% of the time these losses are quantifiable and relate predominantly to response and recovery costs, Reynolds says.


The importance of plugging insurance cyber response gaps

The insurance industry is a lucrative target as organisations hold large amounts of private and sensitive information about their policy holders who, rightfully so, have the expectation of their data being kept safe and secure. This makes it no surprise that the industry is a key target for cyber criminals due to the massive disruption it can cause and the potential high financial reward on offer. Research shows that 82 per cent of the largest insurance carriers were the focus of ransom attacks in 2022. It is expected that the insurance industry will only become a more favourable target, and these types of disruptions will become increasingly severe. The insurance industry is one that has embraced innovation and new forms of technology in its practices over recent years in order to offer its customers a seamless experience. In doing so, alongside the onset of remote working catalysed by the pandemic, they have increased their threat surface. ... These are just the tip of the iceberg, so when cyber criminals look to exploit data, the insurance industry is a primary target due to its huge customer base.


Value Chain Analysis: Best Practices for Improvements

To stay competitive, organizations must ensure that they have picked the right partners for each of the functions in the value chain, and that appropriate value is captured by each participant. “In addition to ensuring each participant’s value and usefulness in the chain, value chain analysis enables organizations to periodically verify that functions are still necessary, and that value is being delivered efficiently without undue waste such as administrative burden, communications costs or transit or other ancillary functions,” he says. Business leaders and IT leaders like the chief information officer and chief data officer must prove that they are benefiting the bottom line. While it is time consuming, value chain analysis is a key method to examine company value -- an essential practice during times of high stakes and economic uncertainty. Jon Aniano, senior vice president, Zendesk, adds that running a full VCA requires analyzing and tracking a massive amount of data across your entire company.


Cybersecurity takes a leap forward with AI tools and techniques

“An effective AI agent for cybersecurity needs to sense, perceive, act and adapt, based on the information it can gather and on the results of decisions that it enacts,” said Samrat Chatterjee, a data scientist who presented the team’s work. “Deep reinforcement learning holds great potential in this space, where the number of system states and action choices can be large.” DRL, which combines reinforcement learning and deep learning, is especially adept in situations where a series of decisions in a complex environment need to be made. Good decisions leading to desirable results are reinforced with a positive reward (expressed as a numeric value); bad choices leading to undesirable outcomes are discouraged via a negative cost. It’s similar to how people learn many tasks. A child who does their chores might receive positive reinforcement with a desired playdate; a child who doesn’t do their work gets negative reinforcement, such as having a digital device taken away.
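As a toy illustration of that reward signal (not the team's actual DRL agent; the single state, two actions, and reward values are invented), a tabular update shows how positive rewards reinforce good actions and negative costs discourage bad ones:

```python
import random

# Toy environment: a single "alert" state where the defender can "isolate" (good) or "ignore" (bad).
actions = ["isolate", "ignore"]
rewards = {"isolate": +1.0, "ignore": -1.0}   # invented reward/cost values
q = {a: 0.0 for a in actions}                  # value estimate for each action
alpha, epsilon = 0.1, 0.2                      # learning rate, exploration rate

for _ in range(500):
    # epsilon-greedy: mostly exploit the best-known action, occasionally explore
    a = random.choice(actions) if random.random() < epsilon else max(q, key=q.get)
    # a positive reward reinforces the choice; a negative cost discourages it
    q[a] += alpha * (rewards[a] - q[a])

print(q)  # "isolate" converges toward +1.0, "ignore" toward -1.0
```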


9 ways ChatGPT will help CIOs

“ChatGPT is very powerful out of the box, so it doesn’t require extensive training or teaching to get up to speed and handle specific business processes. A valuable initial business application for ChatGPT should be directed towards routine tasks, such as filling out a contract. It can effectively review the document and answer the necessary fields using the data and context provided by the organization. With that said, ChatGPT has the potential to shoulder administrative burdens for CIOs quickly, but it’s important to regularly measure the accuracy of its work, especially if an organization plans to use it regularly. The best way for CIOs to get started with ChatGPT is to take the time to grasp how it would work within the context of their organization before rushing to widespread adoption. At these early stages of the technology, it’s better to let it complement existing workflows under close supervision instead of restructuring around it as an end-to-end solution. 
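As a sketch of the kind of routine task described above, the snippet below asks a chat model to fill contract fields from organizational context. It assumes the current OpenAI Python SDK; the model name, prompt, and fields are placeholders, and the output still needs the accuracy checks the passage recommends.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

context = "Counterparty: Acme Ltd. Start date: 1 Oct 2023. Annual fee: EUR 12,000."  # invented example
prompt = (
    "Using only the context below, fill in these contract fields as JSON: "
    "counterparty, start_date, annual_fee.\n\nContext: " + context
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; substitute whatever model your organization uses
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # review this output before it goes into the contract
```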


Art Of Knowledge Crunching In Domain Driven Design

Miscommunication during knowledge crunching sessions can have different causes, such as cognitive bias, a type of error in reasoning, decision-making, and perception that occurs because of the way our brains perceive and process information. This type of bias occurs when an individual’s cognitive processes lead them to form inaccurate conclusions or make irrational decisions. For example, when betting at a roulette table, if previous outcomes have landed on red, we might mistakenly assume that the next outcome will be black; however, these spins are independent of each other (i.e., one outcome does not affect the probability of the next). Another example is apophenia, the tendency to perceive meaningful connections between unrelated things, such as conspiracy theories, or the moment we believe we finally “get it” when in fact we do not. A good example is an image sent from Mars that includes a shape on a rock that you might take for the face of an alien, when it is just a randomly shaped rock.
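The roulette example is easy to check directly; the quick simulation below (a standalone illustration, not from the article) shows that the chance of red is essentially the same whether or not the three previous spins were red, because the spins are independent:

```python
import random

random.seed(0)
RED_PROB = 18 / 37  # European wheel: 18 red pockets out of 37

def spin():
    return "red" if random.random() < RED_PROB else "not_red"

# Conditional estimate: colour of the fourth spin, given three reds just happened
after_three_reds = []
for _ in range(200_000):
    run = [spin() for _ in range(4)]
    if run[:3] == ["red", "red", "red"]:
        after_three_reds.append(run[3])

overall = sum(spin() == "red" for _ in range(200_000)) / 200_000
conditional = after_three_reds.count("red") / len(after_three_reds)
print(f"P(red) overall ~ {overall:.3f}, P(red | three reds before) ~ {conditional:.3f}")
# Both hover around 0.486: the previous outcomes do not change the next one.
```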



Quote for the day:

"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard

Daily Tech Digest - June 23, 2021

Take My Drift Away

Drift is a change in distribution over time. It can be measured for model inputs, outputs, and actuals. Drift can occur because your models have grown stale, bad data is flowing into your model, or even because of adversarial inputs. Now that we know what drift is, how can we keep track of it? Essentially, tracking drift in your models amounts to keeping tabs on what has changed between your reference distribution, like when you were training your model, and your current distribution (production). Models are not static. They are highly dependent on the data they are trained on. Especially in hyper-growth businesses where data is constantly evolving, accounting for drift is important to ensure your models stay relevant. Change in the input to the model is almost inevitable, and your model can’t always handle this change gracefully. Some models are resilient to minor changes in input distributions; however, as these distributions stray far from what the model saw in training, performance on the task at hand will suffer. This kind of drift is known as feature drift or data drift. It would be amazing if the only things that could change were the inputs to your model, but unfortunately, that’s not the case.
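One common way to keep tabs on that change is to compare each feature's reference distribution against its production distribution with a statistical test; the sketch below uses a two-sample Kolmogorov-Smirnov test (the synthetic data and the p-value threshold are illustrative only):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)    # feature values seen at training time
production = rng.normal(loc=0.4, scale=1.0, size=5_000)   # live traffic whose mean has shifted

stat, p_value = ks_2samp(reference, production)
DRIFT_P_THRESHOLD = 0.01  # illustrative cut-off; tune per feature and sample size

if p_value < DRIFT_P_THRESHOLD:
    print(f"Feature drift detected (KS statistic={stat:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")
```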


7 best practices for enterprise attack surface management

To mount a proper defense, you must understand what digital assets are exposed, where attackers will most likely target a network, and what protections are required. So, increasing attack surface visibility and building a strong representation of attack vulnerabilities is critical. The types of vulnerabilities to look for include older and less secure computers or servers, unpatched systems, outdated applications, and exposed IoT devices. Predictive modeling can help create a realistic depiction of possible events and their risks, further strengthening defense and proactive measures. Once you understand the risks, you can model what will happen before, during and after an event or breach. What kind of financial loss can you expect? What will be the reputational damage of the event? Will you lose business intelligence, trade secrets or more? “The successful [attack surface mapping] strategies are pretty straightforward: Know what you are protecting (accurate asset inventory); monitor for vulnerabilities in those assets; and use threat intelligence to know how attackers are going after those assets with those vulnerabilities,” says John Pescatore, SANS director of emerging security trends.


How Chainyard built a blockchain to bring rivals together

There’s the technology of building the blockchain, and then there’s building the network and the business around that. So there are multiple legs to the stool, and the technology is actually the easiest piece. That’s just establishing architecturally how you want to embody that network, how many nodes, how many channels, how your data is going to be structured, and how information is going to move among the blockchain. But the more interesting and challenging exercise, as is true with any network, is participation. I think it was Marc Andreessen who famously said “People are on Facebook because people are on Facebook.” You have to drive participation, so you have to consider how to bring participants to this network, how organizations can be engaged, and what’s going to make it compelling for them. What’s the value proposition? What are they going to get out of it? How do you monetize and how do you operate it? And you can’t figure that on the fly. So we went out to bring the top-of-the-food-chain organizations in various industries on board, so they can help establish the inertia for the network to take off. 


Strategies, tools, and frameworks for building an effective threat intelligence team

The big three frameworks are the Lockheed Martin Cyber Kill Chain®, the Diamond Model, and MITRE ATT&CK. If there’s a fourth, I would add VERIS, which is the framework that Verizon uses for their annual Data Breach Investigations Report. I often get asked which framework is the best, and my favorite answer as an analyst is always, “It depends on what you’re trying to accomplish.” The Diamond Model offers an amazing way for analysts to cluster activity together. It’s very simple and covers the four parts of an intrusion event. For example, if we see an adversary today using a specific malware family plus a specific domain pattern, and then we see that combination next week, the Diamond Model can help us realize those look similar. The Kill Chain framework is great for communicating how far an incident has gotten. We just saw reconnaissance or an initial phish, but did the adversary take any actions on objectives? MITRE ATT&CK is really useful if you’re trying to track down to the TTP level. What are the behaviors an adversary is using? You can also incorporate these different frameworks.


Building a Scalable Data Service in the Modern Microservices World

The microservices architecture not only makes the whole application much more decoupled and cohesive, it also makes teams more agile, able to deploy frequently without interrupting or depending on others. Communication among services is most commonly done over the HyperText Transfer Protocol. The request and response format (XML or JSON) is known as the API contract, and that is what binds services together to form the complete behaviour of the application. In the example given above, we are talking about an application that serves both web and mobile users and allows external services to integrate using REST API endpoints provided to end users. Each use case has its own endpoints exposed behind an individual load balancer that distributes incoming requests across the best available resources. Each internal service contains a web server that handles all incoming requests and forwards them to the right service or in-house application, an application server that hosts all the business logic of the microservice, and a quasi-persistent layer: a local replica of the database based on spatial and/or temporal locality of data.
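A minimal sketch of one such internal service and its API contract might look like the Flask handler below (the endpoint, fields, and in-memory "local replica" are invented for illustration):

```python
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Stand-in for the quasi-persistent layer: a local replica of the data this service owns
LOCAL_REPLICA = {"42": {"id": "42", "name": "Ada", "plan": "pro"}}

@app.route("/users/<user_id>", methods=["GET"])
def get_user(user_id):
    """The JSON shape returned here is the API contract other services depend on."""
    user = LOCAL_REPLICA.get(user_id)
    if user is None:
        abort(404)
    return jsonify(user)

if __name__ == "__main__":
    app.run(port=8080)  # in production this would sit behind the service's load balancer
```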


Validation of Autonomous Systems

Autonomous systems have complex interactions with the real world. This raises many questions about their validation: How can decision-making be traced back and judged after the fact? How do we supervise learning, adaptation, and especially correct behaviors – specifically when critical corner cases are observed? Another challenge is how to define reliability in the event of failure. With artificial intelligence and machine learning, we also need to satisfy algorithmic transparency. For instance, what rules can be extracted from a neural network – which is no longer algorithmically tractable – to determine how an autonomous system might react to several hazards at the same time? Classic traceability and regression testing will certainly not work. Rather, future verification and validation methods and tools will need to include more intelligence, drawing on big data, business intelligence, and their own learning to improve software quality in a dynamic way.

The New Future Of Work Requires Greater Focus On Employee Engagement

When it comes down to it, engagement is all about employee empowerment—helping employees not just be satisfied in their work but feeling like a valued member of the team. Unfortunately 1 in 4 is planning to look for work with a new employer once the pandemic is over largely due to a lack of empowerment in the workplace—a lack of advancement, upskilling opportunities, and more. Organizations like Amazon, Salesforce, Microsoft, AT&T, Cognizant and others have started upskilling initiatives designed to help employees, wherever they are in the company, advance to new positions. These organizations are taking an active role in the lives of their employees and are helping them grow. These reasons are likely why places like Amazon repeatedly top the list for best places to work. Before the pandemic, just 24% of businesses felt employee engagement was a priority. Following the pandemic, the number hit nearly 36%. Honestly, that’s still shockingly low! It’s just common sense that engaged employees will serve a company better.


Architectural Considerations for Creating Cloud Native Applications

The ability to deploy applications with faster development cycles also opens the door to more flexible, innovative, and better-tailored solutions. All this undoubtedly positively impacts customer loyalty, increases sales, and lowers operating costs, among other factors. As we mentioned, microservices are the foundation of cloud native applications. However, their real potential can be leveraged by containers, which allow them to package the entire runtime environment and all its dependencies, libraries, binaries, etc., into a manageable, logical unit. Application services can then be transported, cloned, stored or used on-demand as required. From a developer’s perspective, the combination of microservices and containers can support the 12-Factor App methodology. This methodology aims primarily to avoid the most common problems programmers face when developing modern cloud native applications. The benefits of following the guidelines proposed by the 12-Factor methodology are innumerable.
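One of the twelve factors, keeping configuration in the environment, is easy to show in a few lines; in the sketch below (the variable names are invented) the same container image can be cloned across environments because nothing environment-specific is baked into the code:

```python
import os

# Factor III ("Config"): read environment-specific settings from the environment,
# so the same container image runs unchanged in dev, staging, and production.
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///local-dev.db")  # invented variable names
NEW_CHECKOUT_ENABLED = os.environ.get("FEATURE_FLAG_NEW_CHECKOUT", "false") == "true"

print(f"Connecting to {DATABASE_URL}; new checkout enabled: {NEW_CHECKOUT_ENABLED}")
```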


How to be successful on the journey to the fully automated enterprise

When first embarking on automation, many businesses feel like they would like to keep their options open and use the time available to explore what automation can do for their teams and their businesses. The first step in the journey to full automation is often a testing phase which relies on proving a return on investment and consequently convincing the C-suite, departmental heads, and IT of its benefits. Next, once automation has been added to the agenda, in order to provide a centralised view and governance, organizations should create an RPA Centre of Excellence to champion and drive use of the technology. At this stage, select processes are chosen, often in isolation, based on the fact that they have high potential but are low-value tasks which can quickly be automated and show immediate returns in terms of increased productivity or customer satisfaction. This top-down, process-by-process approach, implemented by RPA experts, will help automation programs get off the ground. NHS Shared Business Service (SBS), for example, chose the highly labour-intensive task of maintaining cashflow files as its first large-scale automation.


SOC burnout is real: 3 preventative steps every CISO must take

While most technology solutions aim to make the SOC/IR more efficient and effective, all too often organizations take one step forward and two steps back if the solution creates ancillary workloads for the team. The first measurement of a security tool is if it addresses the pain or gap that the organization needs to fill. The second measurement is if the tool is purpose-built by experts who understand the day-to-day responsibilities of the SOC/IR team and consider those as requirements in the design of their solution. As an example, there is a trend in the network detection and response (NDR) market to hail the benefits of machine learning (ML). Yes, ML helps to identify adversary behavior faster than manual threat hunting, but at what cost? Most anomaly-based ML NDR solutions require staff to perform in-depth “detection training” for four weeks plus tedious ongoing training to attempt to make the number of false positives “manageable.” Some security vendors are redefining their software as a service (SaaS) offering as Guided-SaaS. Guided-SaaS security allows teams to focus on what matters – adversary detection and response. 



Quote for the day:

"Leaders dig into their business to learn painful realities rather than peaceful illusion." -- Orrin Woodward

Daily Tech Digest - October 21, 2020

6 tips for CIOs managing technical debt

Many applications are created to solve a specific business problem that exists in the here-and-now, without thought about how that problem will evolve or what other adjacencies it pertains to. For example, a development team might jump into solving the problem of creating a database to manage customer accounts without taking into consideration how that database is integrated with the sales/prospecting database. This can lead to thousands of staff-hours downstream spent transforming contacts and importing them from the sales to the customer database. ... One of the best-known problems in large organizations is the disconnect between development and operations where engineers design a product without first considering how their peers in operations will support it, thus resulting in support processes that are cumbersome, error-prone and inefficient. The entire programming discipline of DevOps exists in large part to resolve this problem by including representatives from the operations team on the development team -- but the DevOps split exists outside programming. Infrastructure engineers may roll out routers, edge computers or SD-WAN devices without knowing how the devices will be patched or upgraded.


The Third Wave of Open Source Migration

The first and second open-source migration waves were periods of rapid expansion for companies that rose up to provide commercial assurances for Linux and the open-source databases, like Red Hat, MongoDB, and Cloudera. Or platforms that made it easier to host open source workloads in a reliable, consistent, and flexible manner via the cloud, like Amazon Web Services, Google Cloud, and Microsoft Azure. This trend will continue in the third wave of open source migration, as organizations interested in reducing cost without sacrificing development speed will look to migrate more of their applications to open source. They’ll need a new breed of vendor—akin to Red Hat or AWS—to provide the commercial assurances they need to do it safely.  It’s been hard to be optimistic over the last few months. But as I look for a silver lining in the current crisis, I believe there is an enormous opportunity for organizations to get even more nimble in their use of open source. The last 20+ years of technology history have shown that open source is a powerful weapon organizations can use to navigate a global downturn.


It’s Time to Implement Fair and Ethical AI

Companies have gotten the message that artificial intelligence should be implemented in a manner that is fair and ethical. In fact, a recent study from Deloitte indicates that a majority of companies have actually slowed down their AI implementations to make sure these requirements are met. But the next step is the most difficult one: actually implementing AI in a fair and ethical way. A Deloitte study from late 2019 and early 2020 found that 95% of executives surveyed said they were concerned about ethical risk in AI adoption. While machine learning brings the possibility to improve the quantity and quality of decision-making based on data, it also brings the potential for companies to damage their brand and reduce the trust that customers have placed in it if AI is implemented poorly. In fact, these risks were so palpable to executives that 56% of them say they have slowed down their AI adoptions, according to Deloitte’s study. While progress has been made in getting the message out about fair and ethical AI, there is still a lot of work to be done, says Beena Ammanath, the executive director of the Deloitte AI Institute. “The first step is well underway, raising awareness. Now I think most companies are aware of the risk associated” with AI deployments, Ammanath says.


C# designer Torgersen: Why the programming language is still so popular and where it's going next

Like all modern programming languages, C# continues to evolve. With C# 9.0 on course to arrive in November, the next update will focus on supporting "terse and immutable" (i.e. unchangeable) representation of data shapes. "C# 9.0 is trying to take some next steps for C# in making it easier to deal with data that comes over the wire, and to express the right semantics for data, if you will, that comes out of what we call an object-oriented paradigm originally," says Torgersen. C# 9.0 takes the next step in that direction with a feature called Records, says Torgersen. These are a reference type that allow a whole object to be immutable and instead make it act like a value. "We've found ourselves, for a long time now, borrowing ideas from functional programming to supplement the object-oriented programming in a way that really helps with, for instance, cloud-oriented programming, and helps with data manipulation," Torgersen explains. "Records is a key feature of C# 9.0 that will help with that." Beyond C# 9.0 is where things get more theoretical, though. Torgersen insists that there's no concrete 'endgame' for the programming language – or at least, not until it finally reaches some as-yet unknown expiration date.
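As a rough cross-language analogy only (this is not C# 9.0 syntax, which this digest does not show), the "immutable data that acts like a value" idea behind Records resembles a frozen dataclass in Python: value-based equality plus rejection of mutation after construction.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Person:
    first_name: str
    last_name: str

a = Person("Ada", "Lovelace")
b = Person("Ada", "Lovelace")
print(a == b)         # True: compared by value, not by identity
# a.first_name = "X"  # would raise FrozenInstanceError: the data cannot be mutated
```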


DOJ's antitrust fight with Google: how we got here

The DOJ said in its filing that this case is "just beginning." The government also says it's seeking to change Google's practices and that "nothing is off the table" when it comes to undoing the "harm" caused by more than a decade of anticompetitive business. Is it hard to compete with Google? The numbers speak for themselves. But that's because the company is darn good at what it does. Does Google use your data to help it improve search and advertising? Yes, it does. But this suit is not about privacy. It's about Google's lucrative advertising business. Just two years ago, the European Commission (EC) fined Google over €8 billion for various advertising violations. Though the DOJ is taking a similar tack, Google has done away with its most egregious requirements. These included exclusivity clauses, which stopped companies from placing competitors' search advertisements on their results pages and Premium Placement, which reserved the most valuable page real estate for Google AdSense ads. It's also true that Google has gotten much more aggressive about using its own search pages to hawk its own preferred partners. As The Washington Post's Geoffrey A. Fowler recently pointed out: if you search for "T Shirts" on Google, the first real search result appears not on row one, two, or three — those are reserved for advertising — or even rows four through eight.


7 Hard-Earned Lessons Learned Migrating a Monolith to Microservices

It’s tempting to go from legacy right to the bleeding edge. And it’s an understandable urge. You’re seeking to future-proof this time around so that you won’t face another refactor again anytime soon. But I’d urge caution in this regard, and to consider taking an established route. Otherwise, you may find yourself wrangling two problems at once, and getting caught in a fresh new rabbit hole. Most companies can’t afford to pioneer new technology and the ones that can tend to do it outside of any critical path for the business. ... For all its limitations, a monolithic architecture does have several intrinsic benefits. One of which is that it’s generally simple. You have a single pipeline and a single set of development tools. Venturing into a distributed architecture involves a lot of additional complexity, and there are lots of moving parts to consider, particularly if this is your first time doing it. You’ll need to compose a set of tools to make the developer experience palatable, possibly write some of your own, (although I’d caution against this if you can avoid it), and factor in the discovery and learning process for all that as well.


What is confidential computing? How can you use it?

To deliver on the promise of confidential computing, customers need to take advantage of security technology offered by modern, high-performance CPUs, which is why Google Cloud’s Confidential VMs run on N2D series VMs powered by 2nd Gen AMD EPYC processors. To support these environments, we also had to update our own hypervisor and low-level platform stack while also working closely with the open source Linux community and modern operating system distributors to ensure that they can support the technology. Networking and storage drivers are also critical to the deployment of secure workloads and we had to ensure we were capable of handling confidential computing traffic. ... With workforces dispersed, confidential computing can help organizations collaborate on sensitive workloads in the cloud across geographies and competitors, all while preserving privacy of confidential datasets. This can lead to the development of transformation technologies – imagine, for example, being able to more quickly build vaccines and cure diseases as a result of this secure collaboration.


What A CIO Wants You to Know About IT Decision Making

CIOs know the organization needs new ideas, new products, new services, etc. as well as changes to current rules, regulations, and business processes to grow markets and stay ahead of competition. CIOs also know that the rules, regulations, and processes are the foundations of trust. Those things that seem to inhibit new ideas are the things that open customers’ minds to the next new thing an organization might offer. Without the trust established by following the rules, adhering to regulations, and at the far extreme, simply obeying the law, customers would not stick around to try the next new thing. For proof, look at the stock price of organizations that publicly announce IT hacks, data loss, or other trust breaking events. Customers leave when trust is broken, and part of the CIO’s role is to maintain that trust. While CIOs know the standards that must be upheld, they also know how to navigate those standards to support new ideas and change requests. Supporting new ideas and adapting to change requires input from you as the user, the employee or another member of the IT department, beyond just submitting the IT change form or other automated process.


The Biggest Reason Not to Go All In on Kubernetes

Here’s the big thing that gets missed when a huge company open-sources their internal tooling – you’re most likely not on their scale. You don’t have the same resources, or the same problems as that huge company. Sure, you are working your hardest to make your company so big that you have the same scaling problems as Google, but you’re probably not there yet. Don’t get me wrong: I love when large enterprises open-source some of their internal tooling, as it’s beneficial to the open-source community and it’s a great learning opportunity, but I have to remind myself that they are solving a fundamentally different problem than I am. While I’m not suggesting that you avoid planning ahead for scalability, getting something like Kubernetes set up and configured instead of developing your main business application can waste valuable time and funds. There is a considerable time and overhead investment for getting your operations team up to speed on Kubernetes that may not pay out. Google can afford to have its teams learning, deploying, and managing new technology. But especially for smaller organizations, premature scaling or premature optimization are legitimate concerns. You may be attracted to the scalability, and it’s exciting. But, if you implement too early, you will only get the complexity without any of the benefit.


Did Domain Driven Design help me to ease out the approach towards Event Driven Architecture?

The most important aspect of Domain Driven Design is setting the context of a domain or sub-domain, where a domain is a very high-level segregation of different areas of the business and a sub-domain is a particular part of the domain, representing a structure within which users share a specific ubiquitous language with the domain model. Without going into much detail on DDD, another concept to be aware of is context mapping, which consists of identifying and classifying the relationships between bounded contexts within the domain. Contexts can be related to each other through shared goals, reused components (code), or a consumer-producer relationship. ... The principles guiding the combination of DDD and events help us shift the focus from the nouns (the domain objects) to the verbs (the events) in the domain. Focusing on the flow of events helps us understand how change propagates in the system — things like communication patterns, workflow, figuring out who is talking to whom, who is responsible for what data, and so on. Events represent facts about the domain and should be part of the ubiquitous language of the domain.
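As a small sketch of that shift from nouns to verbs (the event name, fields, and handler are illustrative only), a domain event can be modelled as an immutable fact, phrased in the ubiquitous language, that other bounded contexts subscribe to:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class OrderPlaced:                       # a verb-phrased fact from the ordering context
    order_id: str
    customer_id: str
    occurred_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

subscribers = []                          # e.g. handlers owned by the shipping bounded context

def publish(event):
    """Propagate the fact; downstream contexts react without the ordering context knowing how."""
    for handle in subscribers:
        handle(event)

subscribers.append(lambda e: print(f"shipping: prepare parcel for order {e.order_id}"))
publish(OrderPlaced(order_id="o-123", customer_id="c-9"))
```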



Quote for the day:

“The only way to do great work is to love what you do. If you haven’t found it yet, keep looking. Don’t settle.” -- Steve Jobs

Daily Tech Digest - March 16, 2019


Even if blockchains provide data immutability, the amount of transaction throughput that blockchains can support compared to those of transaction platforms currently in production is tiny. The best blockchain deployments that are known today maybe can handle 10,000 transactions per second, according to Parizo. “That is controversial because so few people understand the details and those systems are not truly blockchain,” he added. “You have to dissemble blockchain until it is no longer blockchain to get it to scale.” However, blockchain deployments do not need to compete with such implementations. The technology’s sweet spot is in environments where there are low volumes of highly valuable discrete transactions, according to Peter Lindstrom, vice president of securities strategies at IDC and who moderated the panel. Blockchain’s greatest weakness may be its reliance on public key encryption, which can be a single point of failure. “If the key is lost, so is the data and, potentially, the transaction,” said Parizo. “If the key is compromised, someone else can access the data or the related asset.”



“Software will account for 90 percent of future innovations in the car,” Herbert Diess told VW’s annual press conference. Volkswagen is retooling its strategy in the wake of the so-called dieselgate scandal, which has cost it more than 28 billion euros ($32 billion) in fines and penalties after the uncovering in 2015 of VW’s use of engine management software to mask excess pollution levels. Demand for software functions has risen exponentially as customers increasingly expect advanced driver assistance systems, smartphone connectivity and self-driving functions. “Today our 20,000 developers are 90 percent hardware-oriented. That will change radically by 2030. Software will account for half of our development costs,” Diess said. Compared to a smartphone, a car has ten times as many lines of software code, and a self-driving car will have a thousand times that amount, Diess explained.


“The stakes suddenly just got higher, which is why governments are really worrying about it, but on the positive side, what they really want to build in trust and security early.” To address this, Hannigan said there are three key things to do. First, understand the risks better such as the complex and deep interdependencies in modern supply chains. “Many companies do not really understand the vulnerabilities in their supply chains and the risks they are exposed to as a result.” Second, he said, security needs to be retro-fitted to infrastructure that was not designed with security in mind. “An obvious example is the trusted platform module, where industry worked together to show that it can be done. “And the third thing we need to do is to ensure that everything we build is secure by design and by default, and every government is worrying about this,” said Hannigan. “Building in security and trust when you design something is absolutely critical, and every government is looking at regulation on this.”


After the Cambridge Analytica scandal which found Facebook complicit in allowing the firm to harvest millions of user profiles for political purposes without their consent, politicians around the world are demanding Facebook be regulated. Consumer trust in Facebook was shattered following the scandal. A Ponemon Institute survey found a 66% decline in consumer trust in advance of Zuckerberg’s Senate testimony where it was clear that most senators did not understand what Facebook does. So, following a significant data breach, a titanic loss of consumer trust, calls by numerous politicians for regulation, and a massive service outage, Facebook wants to become a bank issuing its own cryptocurrency. A year is a long time in social media. Banking and financial services are built on consumer trust and Facebook is overdrawn in the trust account. Bankers’, politicians’, policy makers’, and regulators’ spider senses are tingling. Whilst the last decade has been a decennium horribilis for the banking sector, from the Lehman Brothers sub-prime mortgage driven bankruptcy to the Wells Fargo account fraud scandal, consumer trust and confidence in banks has also been eroded.


Tech-proofing the millennial workplace of the future

As worker expectations evolve, so must the abilities of employers, who need to recognise the impact that these demands will have on their workplace. Employers should prepare themselves to meet the needs of tech-savvy workers of the future, who will make up the workforce of tomorrow. Millennials are already dominating the workplace – 160 million currently make up the European workforce – and this figure is only set to increase, with millennials due to account for 75 per cent of the global workforce by 2025. The future generation of workers possess the digital skills that organisations need in order to achieve long-term success. They bring new perspectives and habits to the workplace, and their tech-savvy knowhow is invaluable. Consequently, companies must tailor their office set-ups to their needs and expectations, as the numbers of this age continue to swell the working ranks. Research has shown that 25-to-34-year-olds are the most enthusiastic age segment about tech-enabled working conditions. So, when it comes to recruitment, a tooled-up office could help with hiring these younger workers.


TEMPO And The Art Of Disruption

Boyd’s analysis revealed that the ace pilots had faster OODA loops: they were able to observe, orient, decide, and act more quickly than their peers. By continually shortening their OODA loops, and thus increasing the tempo of the battle, they consistently caught their opponents off-guard. According to Boyd, when the loop is so fast and tight that a competitor’s response rate drops to zero, the opponent with the faster tempo has disrupted the competitor—and the end result is victory. The same concept applies to today’s uncertain business environment. Disruptors—the most agile, responsive, and aggressive companies—put the squeeze on competitors with a similar dynamic loop. But since a solo pilot’s reaction time is unique to the circumstances and is far faster than an organization’s, we have adjusted the loop to better reflect that business reality. Our business version consists of four repeating aspects: scan, orient, decide, and act (SODA). Disruptors continually scan the landscape, orient themselves to new circumstances, decide how to respond, and act quickly.


At these factories, robots are making jobs better for workers


“In one case, the company found that people are actually better than any robot when it comes to installing the interior and engine of the car,” explains Adrian. But BMW also found that some of that work requires more strength than the typical worker might possess. So it devised a “co-roboting” system, where a worker’s ability is augmented by a machine. “The operator on the left side of the car guides the installation,” Adrian explains, “while also controlling a robot positioned on the right side, which can apply tremendous torque to complete the fit wherever needed. So strength is no longer a barrier to entry for this role,” Adrian explains. “It’s open to anyone with the right skills.” Diego Hernandez-Diaz, who’s also an engagement manager, visited five factories through the project. “I was really impressed by the lengths to which one electronics manufacturer went to help its people learn new skills,” he says. “It built out a fully-spec’d, virtual version of its factory.


10 Deadly Mistakes to Avoid When Learning Java

To code or not to code? It seems that you’ve made your choice in favor of the first option. Programming is a great field for professional growth. It gives you an opportunity to take part in interesting projects and work wherever you want. The only obstacle that restrains many beginners from starting a new career is the lack of understanding of how exactly they should learn to code. What’s more important is that even the best universities can’t fully provide a complete programming education that will guarantee a strong career as a software developer. This is because programming is too dynamic and flexible: once you start learning, you had better keep doing it for the rest of your life. Some programmers say that they had to try learning how to code a few times before finally reaching their goal. Yes, we all learn from mistakes, but you’ll be surprised how many common lapses there are in mastering this skill.


How digital payment solutions will shape the future of banking


While technological advancements have been revolutionising the banking space in terms of biometric security through unique identifiers like fingerprints, facial recognition, and voice recognition, the advent of ‘big data’ is one of the most crucial interventions for the banking industry. Through effective storage, analysis, and interpretation of vast and complex sets of data, previously untapped patterns and trends can be uncovered for new client insights. This may result in significant commercial benefits while assuring privacy. Further, data management has the potential to make payments, finance, assurance, engagement, and banking more effective and tailor-made for each client, helping industry partners to optimise their internal processes and add value through a data-based business understanding. By extending these augmented data management competencies directly to clients, banks can make use of insights such as consumer-spending habits as a means of promoting cost saving by identifying frauds or errors, proving to be a source of competitive advantage.


Shadow IT a Risk to Operational Resilience of Financial Institutions

While providing enormous business flexibility, Shadow IT applications can pose a significant operational, regulatory or reputational risk to the business. For example, an uncontrolled spreadsheet might provide calculations that feed into multiple models. ... Worse, there would likely be no visibility of this change, so identifying and remediating it would take time, extending the scale of the business and market impact that the Operational Resilience initiative is designed to address. While the UK regulators have not yet defined or scheduled any regulation relating to Operational Resilience, there is no doubt that it is on the horizon; informal discussions with the regulators allude to this. Financial institutions need to build a framework for Shadow IT risk management. This will enable them to understand their Shadow IT landscape and the critical business services and processes these applications support; define the risk these applications pose to the institution’s operations; determine the potential financial, operational, regulatory and reputational impact of errors; and establish governance processes for change.



Quote for the day:


"Leadership Principle: As hunger increases, excuses decrease." - Orrin Woodward


Daily Tech Digest - September 30, 2018

How to successfully implement an AI system

Companies should calculate the anticipated cost savings that would be gained with a successful AI deployment, using that as a starting point for investment so that the costs of errors or shortfalls against expectations are minimised if they occur. The cost savings should be based on efficiency gains, as well as the increased productivity that can be harnessed in other areas of the business by freeing up staff from administrative tasks. This ensures companies do not over-invest before seeing initial results; if changes are necessary, they do not cannibalise potential ROI, and companies can still switch to other viable use cases. Before advising companies on what solution they should invest in, it's important to first establish what they want to achieve. Digital colleagues can provide a far superior level of customer service; however, they require greater resources to set up. Most chatbots are not scalable: once deployed, they cannot be integrated into other business areas because they are designed to answer FAQs based on a static set of rules. Unlike digital colleagues, they cannot understand complex questions or perform several tasks at once.
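
As a rough sketch of the sizing approach described above, the snippet below turns efficiency gains and freed-up staff time into an anticipated-savings figure and an investment ceiling. Every number here (hours saved, head count, hourly cost, the uplift and ceiling factors) is a hypothetical placeholder, not anything from the article.

```python
# Hypothetical back-of-the-envelope sizing for an AI deployment budget:
# anticipated savings = efficiency gains plus productivity recovered from
# freed-up administrative time.

hours_saved_per_employee_per_week = 4      # assumption
employees_affected = 250                   # assumption
fully_loaded_hourly_cost = 45.0            # assumption, in local currency
weeks_per_year = 48

efficiency_savings = (hours_saved_per_employee_per_week
                      * employees_affected
                      * fully_loaded_hourly_cost
                      * weeks_per_year)

# Assume only part of the freed time converts into higher-value work.
productivity_uplift_factor = 0.5           # assumption
anticipated_annual_savings = efficiency_savings * (1 + productivity_uplift_factor)

# Use the anticipated savings as a ceiling for first-year investment, so a
# shortfall against expectations does not cannibalise ROI.
max_first_year_investment = 0.6 * anticipated_annual_savings   # assumption
print(f"Anticipated annual savings: {anticipated_annual_savings:,.0f}")
print(f"First-year investment ceiling: {max_first_year_investment:,.0f}")
```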


How adidas is Creating a Digital Experience That's Premium, Connected, and Personalized

Take something like a product description. How do we shape product descriptions and offerings so that, if you're interested in sports, we will help you find exactly the product that you need for the sport that you're interested in? We will also educate you and bring you back at different points in time to help you find out what you need when you need it, or with an engagement program. Ultimately, like the membership program, it's about having something that's sticky, something you can keep coming back to, where you can participate in events and experiences. For us, a lot of it’s really deepening those experiences but also exploring new technologies and new areas. Omnichannel was kind of the original wave; I said it was the freight train that came past us a couple of years ago. Now we're also looking at what those next freight trains are, whether it's technologies like blockchain or picking up a new channel. For example, we're working extensively with Salesforce on automation, how we can automate consumer experiences.


What Deep Learning Can Offer to Businesses


With the capabilities of artificial intelligence, the way words are processed and interpreted can change dramatically. It turns out we can infer the meaning of a word from its position in the text, without needing a dictionary. ... One of the most recent successful applications of deep learning for image recognition came from the Large Scale Visual Recognition Challenge, when Alex Krizhevsky applied convolutional neural networks to organize images from ImageNet, a dataset containing 1.2 million pictures, into 1,000 different classes. In 2012, Krizhevsky’s network, AlexNet, achieved a top-5 test error rate of 15.3%, outperforming traditional computer vision solutions by more than 10 percentage points. Krizhevsky’s result changed the landscape of data science and artificial intelligence, both for research and for business applications. In 2012, AlexNet was the only deep learning model at ILSVRC (ImageNet Large Scale Visual Recognition Competition). Two years later, in 2014, there were no conventional computer vision solutions among the winners.
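
For readers curious what ImageNet-style classification looks like in code, here is a minimal sketch that runs a pretrained AlexNet from torchvision on a single image. It assumes PyTorch, torchvision, and Pillow are installed; the weights identifier, preprocessing constants, and the input file name are standard torchvision conventions and placeholders, not details from Krizhevsky's original setup.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing used by torchvision's pretrained models.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Recent torchvision versions accept a weights string; older ones use pretrained=True.
model = models.alexnet(weights="IMAGENET1K_V1")  # 1,000 ImageNet classes
model.eval()

img = Image.open("example.jpg").convert("RGB")   # hypothetical input image
batch = preprocess(img).unsqueeze(0)             # shape: [1, 3, 224, 224]

with torch.no_grad():
    logits = model(batch)
    # Top-5 predictions, matching the top-5 error metric cited above.
    top5 = torch.topk(logits.softmax(dim=1), k=5)
print(top5.indices, top5.values)
```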



Can Global Semantic Context Improve Neural Language Models?

Global co-occurrence count methods like LSM lead to word representations that can be considered genuine semantic embeddings, because they expose statistical information that captures semantic concepts conveyed within entire documents. In contrast, typical prediction-based solutions using neural networks only encapsulate semantic relationships to the extent that they manifest themselves within a local window centered around each word (which is all that’s used in the prediction). Thus, the embeddings that result from such solutions have inherently limited expressive power when it comes to global semantic information. Despite this limitation, researchers are increasingly adopting neural network-based embeddings. Continuous bag-of-words and skip-gram (linear) models, in particular, are popular because of their ability to convey word analogies of the type “king is to queen as man is to woman.”
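
To make the analogy property concrete, here is a minimal sketch of the vector arithmetic behind “king is to queen as man is to woman.” The embedding vectors below are tiny, hand-made values chosen purely for illustration; real skip-gram or CBOW vectors would be learned from a large corpus.

```python
import numpy as np

# Toy, hand-made embeddings for illustration only.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.2, 0.8]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king" - "man" + "woman" should land closest to "queen".
target = emb["king"] - emb["man"] + emb["woman"]
best = max((w for w in emb if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(emb[w], target))
print(best)  # expected: queen
```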


Big Data and Machine Learning Won’t Save Us from Another Financial Crisis


Machine learning can be very effective at short-term prediction, using the data and markets we have encountered. But machine learning is not so good at inference, learning from data about underlying science and market mechanisms. Our understanding of markets is still incomplete. And big data itself may not help, as my Harvard colleague Xiao-Li Meng has recently shown in “Statistical Paradises and Paradoxes in Big Data.” Suppose we want to estimate a property of a large population, for example, the percentage of Trump voters in the U.S. in November 2016. How well we can do this depends on three quantities: the amount of data (the more the better); the variability of the property of interest (if everyone is a Trump voter, the problem is easy); and the quality of the data. Data quality depends on the correlation between the voting intention of a person and whether that person is included in the dataset. If Trump voters are less likely to be included, for example, that may bias the analysis.
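
A minimal simulation of the data-quality point: even a very large sample gives a biased estimate if inclusion in the dataset is correlated with the property being measured. The population size, true proportion, and inclusion probabilities below are invented for illustration and are not Meng's numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: 2 million voters, 46% support candidate X.
n_population = 2_000_000
true_support = 0.46
supports = rng.random(n_population) < true_support

# Data-quality problem: supporters are slightly less likely to end up in the
# dataset (inclusion probability 4% vs 5%), yet the sample is still "big".
p_include = np.where(supports, 0.04, 0.05)
included = rng.random(n_population) < p_include
sample = supports[included]

print(f"sample size:     {sample.size:,}")       # tens of thousands of rows
print(f"true support:    {true_support:.3f}")
print(f"sample estimate: {sample.mean():.3f}")   # biased despite the big sample
```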


Spending on cognitive and AI systems to reach $77.6 billion in 2022

Banking and retail will be the two industries making the largest investments in cognitive/AI systems in 2018 with each industry expected to spend more than $4.0 billion this year. Banking will devote more than half of its spending to automated threat intelligence and prevention systems and fraud analysis and investigation while retail will focus on automated customer service agents and expert shopping advisors & product recommendations. Beyond banking and retail, discrete manufacturing, healthcare providers, and process manufacturing will also make considerable investments in cognitive/AI systems this year. The industries that are expected to experience the fastest growth on cognitive/AI spending are personal and consumer services (44.5% CAGR) and federal/central government (43.5% CAGR). Retail will move into the top position by the end of the forecast with a five-year CAGR of 40.7%. On a geographic basis, the United States will deliver more than 60% of all spending on cognitive/AI systems throughout the forecast, led by the retail and banking industries.


5 ways industrial AI is revolutionizing manufacturing

In manufacturing, ongoing maintenance of production line machinery and equipment represents a major expense, having a crucial impact on the bottom line of any asset-reliant production operation. Moreover, studies show that unplanned downtime costs manufacturers an estimated $50 billion annually, and that asset failure is the cause of 42 percent of this unplanned downtime. For this reason, predictive maintenance has become a must-have solution for manufacturers who have much to gain from being able to predict the next failure of a part, machine or system. Predictive maintenance uses advanced AI algorithms in the form of machine learning and artificial neural networks to formulate predictions regarding asset malfunction. This allows for drastic reductions in costly unplanned downtime, as well as for extending the Remaining Useful Life (RUL) of production machines and equipment. In cases where maintenance is unavoidable, technicians are briefed ahead of time on which components need inspection and which tools and methods to use, resulting in very focused repairs that are scheduled in advance.
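
As a sketch of what a simple predictive-maintenance model might look like (assuming scikit-learn is available), the snippet below trains a regressor to estimate remaining useful life from a few sensor features and flags machines for inspection. The feature names, the synthetic data, and the 500-hour threshold are all invented for illustration, not taken from any real production line.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic sensor readings per machine: vibration, temperature, run hours.
n = 2_000
X = np.column_stack([
    rng.normal(0.5, 0.2, n),     # vibration (arbitrary units)
    rng.normal(70.0, 8.0, n),    # temperature (°C)
    rng.uniform(0, 10_000, n),   # cumulative run hours
])
# Invented ground truth: RUL shrinks with wear, vibration, and temperature.
rul_hours = np.clip(12_000 - X[:, 2] - 2_000 * X[:, 0]
                    - 20 * (X[:, 1] - 70) + rng.normal(0, 200, n), 0, None)

X_train, X_test, y_train, y_test = train_test_split(X, rul_hours, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)

# Flag machines predicted to fail within the next 500 operating hours.
predicted_rul = model.predict(X_test)
needs_inspection = predicted_rul < 500
print(f"machines flagged for inspection: {needs_inspection.sum()} of {len(X_test)}")
```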


Data Centers Must Move from Reducing Energy to Controlling Water

While it is a positive development that overall energy for data centers is being reduced around the globe, a key component that has — for the most part — been glossed over is water usage. One example of this is the continued use of open-cell towers, which take advantage of evaporative cooling to cool the air with water before it enters the data center. While this solution reduces energy, the water usage is very high. Raising the issue of water reduction is the first step in creating ways our industry can do something about it. As we experience the continued deluge of the “Internet of Things” (projected to exceed 20 billion devices by 2020), we will only be able to ride this wave if we keep energy use low and start reducing water usage. The first question becomes: how can cooling systems reject heat more efficiently? Let’s say heat is coming off the server at 100 degrees Fahrenheit. The idea is to efficiently capture that heat and reject it to the atmosphere as close to that temperature as possible, but it all depends on the absorption system.


AI and Automation to Have Far Greater Effect on Human Jobs by 2022

With the domination of automation in a business framework, the workforce can be extended to new productivity-enhancing roles. More than a quarter of surveyed businesses expect automation to lead to the creation of new roles in their enterprise. Apart from allotting contractors more task-specialized work, businesses plan to engage workers in a more flexible manner, utilizing remote staffing beyond physical offices and decentralization of operations. Among these technologies, AI adoption has taken the lead in terms of automation for reducing the time and investment required by end-to-end processes. “Currently, AI is the most rapidly growing technology and will for sure create a new era of the modern world. It is the next revolution, relieving humans not only from physical work but also from mental effort, and simplifying tasks extensively,” opined Kuppa. While human-performed tasks dominate today’s work environment, the frontier is expected to change in the coming years.


Modeling Uncertainty With Reactive DDD

Reactive is a big thing these days, and I'll explain later why it's gaining a lot of traction. What I think is really interesting is that the way DDD was used or implemented, say back in 2003, is quite different from the way that we use DDD today. If you've read my red book, Implementing Domain-Driven Design, you're probably familiar with the fact that the bounded contexts that I model in the book are separate processes, with separate deployments. Whereas, in Evans's blue book, bounded contexts were separated logically, but sometimes deployed in the same deployment unit, perhaps in a web server or an application server. In our modern day use of DDD, I’m seeing more people adopting DDD because it aligns with having separate deployments, such as in microservices. One thing to keep clear is that the essence of Domain-Driven Design is really still what it always was -- it's modeling a ubiquitous language in a bounded context. So, what is a bounded context? Basically, the idea behind bounded context is to put a clear delineation between one model and another model.
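
As one possible illustration of “a clear delineation between one model and another,” here is a minimal sketch of two bounded contexts that each keep their own model and communicate only through an explicit integration event. The context names, classes, and event are invented for illustration; they are not examples from the book or the talk.

```python
from dataclasses import dataclass

# --- Sales bounded context: its own model and language -------------------
@dataclass
class SalesOrder:
    order_id: str
    customer_id: str
    total_amount: float

# --- Shipping bounded context: a different model, different language -----
@dataclass
class Shipment:
    shipment_id: str
    destination: str

# --- Integration event: the only thing that crosses the boundary ---------
@dataclass(frozen=True)
class OrderPlaced:
    order_id: str
    destination: str

def sales_place_order(order: SalesOrder, destination: str) -> OrderPlaced:
    # The sales context publishes an event; it knows nothing about shipments.
    return OrderPlaced(order_id=order.order_id, destination=destination)

def shipping_on_order_placed(event: OrderPlaced) -> Shipment:
    # The shipping context translates the event into its own model.
    return Shipment(shipment_id=f"ship-{event.order_id}",
                    destination=event.destination)

order = SalesOrder("o-42", "c-7", 199.0)
event = sales_place_order(order, "Berlin")
print(shipping_on_order_placed(event))
```

In a reactive, microservices-style deployment the event would travel over a message broker rather than a direct function call, but the boundary-crossing shape is the same.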



Quote for the day:


"A company is like a ship. Everyone ought to be prepared to take the helm." -- Morris Wilks