Compromises and trade-offs in software are unavoidable, and Evans encouraged everyone to accept that "Not all of a large system is going to be well designed." Just as "good fences make good neighbors", bounded contexts shield good parts of the system from the bad. It therefore stands to reason that not all development will be within well-defined bounded contexts, nor will every project follow DDD. While developers often lament working on legacy systems, Evans places high value on legacy systems, as they are often the money makers for companies. His encouragement for developers to "hope that someday, your system is going to be a legacy system" was met with applause.
At its most basic level, the monetary system is built around the idea of storing and transferring value. Banks are not going to disappear; there are still high-level efficiencies and advantages to having banks aggregate stored value and deploy it at a targeted rate of return. For example, a bank can write thousands of mortgages and then securitize a portion of them; this is never going to be a process suitable for the crowdfunding model. Blockchain technology creates numerous benefits across industries and applications, especially with regard to value transfer. Banks can realize extraordinary efficiencies, streamline their back-office functions, and reduce risk in the process. Smart contracts add the dynamic of constraints and conditional operations, transferring or storing value only when certain conditions have been met and verified.
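The conditional-transfer idea behind smart contracts can be sketched in plain code. This is a minimal, hypothetical illustration, not any real contract platform's API; the class and field names are assumptions made for the example.

```python
# Hypothetical sketch: a smart contract releases value only when its
# stated conditions are verified. All names here are illustrative.

class ConditionalTransfer:
    """Holds value in escrow and releases it to the recipient only if
    every condition reports True at settlement time."""

    def __init__(self, sender, recipient, amount, conditions):
        self.sender = sender
        self.recipient = recipient
        self.amount = amount
        self.conditions = conditions   # list of zero-argument predicates
        self.settled = False

    def settle(self):
        if self.settled:
            raise RuntimeError("already settled")
        # Value moves only when all conditions are met and verified.
        if all(check() for check in self.conditions):
            self.settled = True
            return (self.recipient, self.amount)   # value released
        return (self.sender, 0)                    # nothing moves yet


# Usage: release a payment only after a lien is recorded.
lien_recorded = {"done": False}
contract = ConditionalTransfer(
    sender="bank", recipient="seller", amount=250_000,
    conditions=[lambda: lien_recorded["done"]],
)
print(contract.settle())        # condition unmet: no transfer
lien_recorded["done"] = True
print(contract.settle())        # condition verified: value released
```

The point of the pattern is that verification, not trust in a counterparty, gates the movement of value.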
In the real world, this might be a machine moving through different fault and run states, where the effect of an input on the machine's state depends on the state the machine is in at the time. If I go far enough back in time, I realize that my system did receive an input "A", and so, by the rules of my system, the later "B" results in my model producing the output "X". If I don't go back far enough, however, I will think that I only got a "B", and that the output should be "Y". But how far back is "far enough"? The input "A" might have arrived 100 milliseconds ago, or it might have arrived yesterday, or just before the weekend. This means that I cannot simply pick up and run my model over a selected time period any time I want an answer -- quite apart from the sheer impracticality of crunching the numbers while the user waits.
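The history-dependence described above can be made concrete with a tiny state machine. The states and transition rules below are illustrative stand-ins for the "A then B" scenario in the text.

```python
# Minimal sketch: the output produced for input "B" depends on whether
# an "A" was seen earlier, no matter how long ago it arrived.

class Machine:
    def __init__(self):
        self.state = "initial"

    def feed(self, event):
        if event == "A":
            self.state = "armed"       # remember the earlier input
            return None
        if event == "B":
            # Same input, different output, depending on history.
            return "X" if self.state == "armed" else "Y"
        return None


m = Machine()
print(m.feed("B"))   # no prior "A": prints Y
m.feed("A")          # could have arrived 100 ms ago or last week
print(m.feed("B"))   # prints X
```

This is why replaying a bounded time window is unsafe: unless the window reaches back far enough to include the "A", the model wrongly emits "Y". Carrying the state forward avoids re-crunching history on demand.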
Data is the fuel of the new AI-based economy. Companies, consumers, and web-connected devices create terabytes of data that drive AI research and innovation. Some companies, like Google and Facebook, acquire data from their users, who provide ratings, clicks, and search queries. For other companies, data acquisition can be a complicated process, especially if they need an enterprise solution for a limited number of members rather than a one-size-fits-all solution for millions of users. Luckily, the emerging AI markets offer a broad range of options for companies to kickstart their AI strategies. As a venture studio partner, I see startups struggling to source the initial data sets for their business problems. That's why I've listed the most popular ways young companies can source data for their AI businesses.
Many startups can afford to be scrappy at the start, running with only a few employees while gaining momentum; when your product is a connected device, however, it is more difficult to build a small team with the full range of skills needed to launch a successful product. Luckily, there are plenty of external resources available to help such companies. If a founding team is strong on hardware, it can use an agency to get its first software suite built. There are also services it can leverage to help with the build and distribution chain. Anywhere work can be offloaded in order to focus on value increases the chances of success. The team can then build out in-house staff to save money once it has traction.
The alliance says that the group's open-source tools and property will help enterprises register IoT devices and create event logs on decentralized systems, which in turn will lead to a trusted IoT ecosystem linking cryptographic registration, "thing" identities, and metadata ... "The world is beginning to recognize the potential of blockchain technology to fundamentally reshape the way business is done globally - and we're still just scratching the surface," said Ryan Orr, CEO of Chronicled. "At this early stage we think it's vitally important to establish an inclusive framework that ensures openness, trust, and interoperability among the many parties, in both the public and private sectors, that we believe will begin to adopt blockchain technology over the next several years."
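The registration-plus-event-log pattern the alliance describes can be sketched briefly. This is a hypothetical illustration under assumed conventions (SHA-256 identities, hash-chained log entries); it is not the alliance's actual specification or tooling.

```python
# Hypothetical sketch: a "thing" identity derived from a device key,
# and an append-only event log where each entry commits to the previous
# one, so tampering with history is detectable.

import hashlib
import json

def device_id(public_key: bytes) -> str:
    # Cryptographic registration: identity derived from the device key.
    return hashlib.sha256(public_key).hexdigest()[:16]

class EventLog:
    """Append-only log; each entry hashes the previous entry's hash
    together with its own payload."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        h = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev, "hash": h})
        return h


log = EventLog()
dev = device_id(b"example-public-key")
log.append({"event": "register", "thing": dev, "meta": {"model": "sensor-v1"}})
log.append({"event": "reading", "thing": dev, "value": 21.5})
```

On a real decentralized system the chain of hashes would be anchored to a shared ledger rather than held in one process, which is what makes the registry trustworthy across parties.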
In general, you know that when you have public goods, they are in very many cases going to be underfunded. So the interesting thing with a lot of these blockchain protocols is that, for the first time, you have a way to create protocols that actually manage to fund themselves in some way. If this kind of approach takes off, it could potentially end up drastically increasing the quality of the bottom-level protocols that we use to interact with each other in various ways. Ethereum is obviously one example of that: we had the ether sale, and we got about $8 to $9 million by, I guess, basically selling off a huge block of ether. If you look at lots of cryptocurrencies, and lots of layer-two projects on top of Ethereum, a lot of them tend to use a similar model.
It remains to be seen whether the information bottleneck governs all deep-learning regimes, or whether there are other routes to generalization besides compression. Some AI experts see Tishby’s idea as one of many important theoretical insights about deep learning to have emerged recently. Andrew Saxe, an AI researcher and theoretical neuroscientist at Harvard University, noted that certain very large deep neural networks don’t seem to need a drawn-out compression phase in order to generalize well. Instead, researchers program in something called early stopping, which cuts training short to prevent the network from encoding too many correlations in the first place.
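Early stopping, as mentioned above, can be shown in a few lines. This is a generic sketch of the technique, not code from the research discussed; the training loop and toy validation curve are stand-ins.

```python
# Minimal sketch of early stopping: halt training when the validation
# metric stops improving, before the network encodes too many spurious
# correlations. `train_step` and `validate` are placeholder callables.

def train_with_early_stopping(train_step, validate, max_epochs=100, patience=5):
    """Stop once validation loss hasn't improved for `patience` epochs."""
    best_loss, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        train_step(epoch)                 # one pass over the training data
        val_loss = validate(epoch)
        if val_loss < best_loss:
            best_loss, best_epoch = val_loss, epoch
        elif epoch - best_epoch >= patience:
            return best_epoch, best_loss  # cut training short here
    return best_epoch, best_loss


# Toy validation curve: improves until epoch 10, then slowly degrades.
losses = [1.0 / (e + 1) if e <= 10 else 0.09 + 0.01 * (e - 10) for e in range(100)]
print(train_with_early_stopping(lambda e: None, lambda e: losses[e]))
```

Note how this sidesteps any drawn-out compression phase: training simply never runs long enough for over-fitting to set in.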
Continuous delivery is all about improving the stability and speed of your release process, so unsurprisingly you should measure stability and speed! Those are intangibles, but they’re not hard to measure. In How To Measure Anything, Douglas Hubbard shows how to use clarification chains to measure intangibles - you create tangible, related metrics that represent the same thing. Luckily for us, the measures have been identified for us. In the annual State Of DevOps Report, Nicole Forsgren, Jez Humble, et al. have measured how stability and throughput improve when organisations adopt continuous delivery practices. They measure stability with Failure Rate and Failure Recovery Time, and they measure throughput with Lead Time and Frequency. I’ve been a big fan of Nicole and Jez’s work since 2013.
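The four measures named above can be computed from ordinary deployment records. The record format below (commit time, deploy time, failure flag, recovery time) is an assumption for illustration, not the State Of DevOps Report's methodology.

```python
# Illustrative sketch: computing Failure Rate, Failure Recovery Time,
# Lead Time, and Frequency from a list of deployment records.

from datetime import datetime, timedelta

deployments = [
    # (committed, deployed, failed, recovered)
    (datetime(2017, 1, 2, 9), datetime(2017, 1, 2, 15), False, None),
    (datetime(2017, 1, 3, 9), datetime(2017, 1, 4, 11), True,
     datetime(2017, 1, 4, 12)),
    (datetime(2017, 1, 5, 9), datetime(2017, 1, 5, 13), False, None),
    (datetime(2017, 1, 6, 9), datetime(2017, 1, 6, 10), False, None),
]

failures = [d for d in deployments if d[2]]

# Stability: how often deploys fail, and how quickly they recover.
failure_rate = len(failures) / len(deployments)
recovery_time = sum((d[3] - d[1] for d in failures), timedelta()) / len(failures)

# Throughput: how long commits take to reach production, and how often.
lead_time = sum((d[1] - d[0] for d in deployments), timedelta()) / len(deployments)
span = deployments[-1][1] - deployments[0][1]
frequency = len(deployments) / (span.days or 1)   # deploys per day

print(f"failure rate {failure_rate:.0%}, recovery {recovery_time}, "
      f"lead time {lead_time}, {frequency:.1f} deploys/day")
```

Tracking these four numbers over time is what turns the "intangible" stability and speed into a clarification chain you can actually act on.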
No matter their place in a lumbering bureaucracy or how many eye-rolls they may inspire among developers, these people are smart, competent, and valuable to their organizations. So my opinions and criticisms have nothing to do with the humans involved. That said, I think this role is on the decline, and I think that’s good. This role exists in the space between many large software groups. In the old days, those groups coordinated in elaborate, mutually dependent waterfall dances. These days, they “go agile” with methodologies like SAFe, which help them give their waterfall process cooler, more modern-sounding names, like “hardening sprint” instead of “testing phase.” In both cases, the enterprise architect has a home, attending committee-like meetings about how to orchestrate the collaboration among these groups.
Quote for the day:
"Your excuses are nothing more than the lies your fears have sold you." -- Robin Sharma