Koley predicted growth in the use of server disaggregation, which separates compute and memory so those resources can be allocated according to the demands of specific workloads. "We are betting big on that," he said. "I believe it will become big because we are really treating open source as fundamental to our strategy. You will continue to see more products that make it easier for users to build features and applications on top of it, making it more powerful and useful." ... "Intent networking is here and it's transforming how operations are done. The form it's going to take is: how am I going to inform the network as a whole?" Koley said. "When I talk to CIOs or CEOs, they ask 'How can I manage my infrastructure like Google or AWS?' I tell them to write software so you don't need a ton of developers to operate it. We're betting big on that." He said intent-based networking describes how organizations' infrastructure behaves. "It's an important tool because when you're doing complex automation, you want a software layer that takes care of the intent," he said.
Testing is not as easy as it is often presumed to be! It holds great significance for any software development process. Any software tester needs a knack for analytics and the logical application of concepts. When testing software, it is imperative to analyze the given situation and craft a solution accordingly. The right thought process and mindset help break the problem down into parts, making it easy to examine the elements of the problem and their relationships. ... Testing can be a long and tiring process, sometimes requiring the tester to sit down for hours and analyze a certain situation. But after spending those hours, it is crucial to communicate the results clearly to the higher authorities, so that the correct decisions are made about releases and timelines. A good report, along with effective communication, is vital to establishing transparency and trust with all stakeholders, as it conveys all the actions taken, the bugs found, the bugs fixed and any other issues encountered.
Market makers, by contrast, embrace risk, tolerate failure (so long as they learn from it), and continually spend M&A and R&D dollars to create demand where none presently exists. They cultivate a workforce composed of people with diverse backgrounds and skill sets, who reflect the rapidly changing and expanding population they serve. This enables them to better understand and anticipate the needs of the broadest possible audience. These companies don’t just invent new products. They reinvent themselves and swiftly adapt to a rapidly changing world. Amazon, for example, could have rested on its laurels, first as a dominant bookseller, then as a dominant e-tailer. Instead, it has expanded into cloud computing, physical grocery sales, and, more recently, pharmaceutical retail with its $1 billion deal to buy PillPack. Similarly, Google could have restricted itself to its highly profitable search engine. Yet with Android, it created a new digital platform.
The value of the cloud for us, for instance, in our factory use case is that I'm pushing very large volumes of data up into a shared cloud environment where it is easier for me to have a contract manufacturing partner or a downstream enterprise customer interact with that data and have access to that data, versus the old way of providing a VPN tunnel into my on-prem piece of hardware, where they're competing with me from a resource perspective to access that data. The cloud allows collaboration and connectivity to occur in a way that the old on-prem model really doesn't. In addition to that, [it enables] agility, scalability and flexibility. The other big difference is that on-prem is an asset -- there's a capital buy, a capital depreciation schedule and a fixed commitment, whereas the cloud allows the flexibility to move up and down in volume as needed. ... To pick up a workload and move it to another environment is somewhat easy; the hard part is then retuning and rebuilding all your runbooks and optimizing for that experience.
Enterprises increased their investments in IoT by 4% in 2018 over 2017, spending an average of $4.6M this year. Nearly half of enterprises globally (49%) interviewed are aggressively pursuing IoT investments with the goal of digitally transforming their business models this decade. 38% of enterprises have company-wide IoT deployments today, and 55% have an IoT vision and are currently executing their IoT plans. ... The percentage of enterprises scoring 75 or higher on the Intelligent Enterprise Index gained the most of all categories in the last 12 months, increasing from 5% to 11% of all respondents. The majority of enterprises are improving how well they scale the integration of their physical and digital worlds to enhance visibility and mobilise actionable insights. The more real-time the integration unifying the physical and digital worlds of their business models, the better the customer experiences and operational efficiencies attained.
Although it's crucial for data governance professionals to stay abreast of best practices for handling information, the entire staff at an organization should receive relevant training on the subject. That's because when regulated data gets exposed, malicious actions are not to blame the vast majority of the time. Incident metadata collected by Radar Inc. from 2016 and 2017 showed that more than 92 percent of incidents and 87 percent of breaches were unintentional or inadvertent. Given those extremely high percentages, it's unlikely the figures for 2019 will show a significant change over such a short period. So, a smart move for companies is to ensure that all staffers who work with data in any capacity receive ongoing, up-to-date training on data governance and management. Instances of human error should then decline as people become increasingly familiar with best practices and the expectations the company has set for them.
Serverless functions change the economic model of cloud computing. Customers are charged only for the resources used while a function is running. There’s no charge for idle time! This approach differs from the traditional one of deploying code to a user-provisioned and user-managed virtual machine or container that typically runs 24x7 and must be paid for even when idle. Pay-per-use makes Oracle Functions an ideal platform for intermittent workloads or workloads with spiky usage patterns. ... Security is the top priority for Oracle Cloud services, and Oracle Functions is no different. All access to functions deployed on Oracle Functions is controlled through Oracle Identity and Access Management (IAM), which allows both function management and function invocation privileges to be assigned to specific users and user groups. And after they are deployed, functions themselves may access only resources on VCNs in their compartment that they have been explicitly granted access to.
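To make the pay-per-use difference concrete, here is a rough back-of-the-envelope comparison in Python. The rates, invocation counts, and memory sizes below are invented illustrative numbers, not Oracle's actual pricing:

```python
# Illustrative cost comparison: pay-per-use serverless vs. an always-on VM.
# All prices below are made-up example figures, not real cloud rates.

def serverless_monthly_cost(invocations, avg_seconds, gb_memory,
                            price_per_gb_second=0.0000170):
    """Billed only for the time the function actually runs."""
    return invocations * avg_seconds * gb_memory * price_per_gb_second

def vm_monthly_cost(hourly_rate=0.05, hours=730):
    """Billed for every hour the VM is up, busy or idle."""
    return hourly_rate * hours

# A spiky workload: 100,000 invocations/month, 0.5 s each, 0.25 GB memory.
fn_cost = serverless_monthly_cost(100_000, 0.5, 0.25)
vm_cost = vm_monthly_cost()

print(f"serverless:   ${fn_cost:.2f}/month")
print(f"always-on VM: ${vm_cost:.2f}/month")
```

With intermittent traffic like this, the per-second billing wins by a wide margin; the picture can reverse for workloads that keep a machine busy around the clock.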
Java implementations typically use a two-step compilation process. First, the source code is turned into bytecode by the Java compiler. The bytecode is then executed by the Java Virtual Machine (JVM). Modern JVMs use a technique called Just-in-Time (JIT) compilation to produce native instructions that the system's CPU can execute. This supports the "write once, run anywhere" (WORA) approach that Sun espoused in Java's early days. The flexibility of bytecode is a real boon to portability: instead of compiling applications for every platform, the same code is distributed to every system, and the resident JVM manages it. The problem arises when small-footprint devices don't deal well with the overhead of the interpretation required. In addition, the JVM has grown considerably and is far too monolithic for small-footprint applications that need to react quickly. As a result, we are seeing offshoots with significantly less overhead, such as Avian and Excelsior JET, which produce optimized native executables that sacrifice portability for performance.
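The source-to-bytecode-to-VM pipeline is not unique to Java. As a loose analogy (this is CPython, not the JVM), Python also compiles source into bytecode that its own virtual machine then interprets, and the standard `dis` module makes that intermediate form visible:

```python
import dis

# Step 1: the compiler turns source text into bytecode (a code object).
source = "def add(a, b):\n    return a + b\n"
code_obj = compile(source, "<example>", "exec")

# Step 2: the VM executes that bytecode. Running the module-level code
# object defines the function; we then disassemble the function's own
# bytecode to see the instructions the VM would run for a call.
namespace = {}
exec(code_obj, namespace)
add = namespace["add"]

instructions = [i.opname for i in dis.get_instructions(add)]
print(instructions)  # opnames vary by Python version, e.g. a binary-add
                     # instruction followed by RETURN_VALUE
```

The same idea underlies the JVM: ship one portable intermediate representation, and let each platform's VM (with its JIT) turn it into native instructions.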
The platform, called the Content Platform, is an event-based system. It responds to events from different content authoring platforms and triggers a stream of processes run in discrete worker microservices. These services perform functions such as data standardization, semantic tagging analysis, indexing in Elasticsearch, and pushing content to external platforms like Apple News or Facebook. The platform also has a RESTful API which, combined with GraphQL, is the main entryway for front-end clients and products. While designing the overall architecture, the team investigated which languages would fit the platform's needs. Go was compared against Python, Ruby, Node, PHP, and Java. While every language had its strengths, Go best aligned with the platform's architecture. Go's baked-in concurrency and API support, along with its design as a statically typed, compiled language, would enable a distributed eventing system that could perform at scale.
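The event-to-worker flow described above can be sketched roughly as follows. The event payloads and worker steps here are hypothetical stand-ins for the platform's real microservices, collapsed into a single process purely for illustration:

```python
import queue

# Hypothetical stand-ins for the platform's discrete worker services.
def standardize(doc):
    """Normalize fields coming from different authoring platforms."""
    doc["title"] = doc["title"].strip().title()
    return doc

def tag(doc):
    """Attach semantic tags based on the body text (toy heuristic)."""
    doc["tags"] = ["news"] if "news" in doc["body"].lower() else []
    return doc

def index(doc, search_index):
    """Index the processed document (a dict stands in for Elasticsearch)."""
    search_index[doc["id"]] = doc
    return doc

def run_pipeline(events, search_index):
    """Drain authoring events and push each one through the worker chain."""
    while not events.empty():
        doc = events.get()
        index(tag(standardize(doc)), search_index)

events = queue.Queue()
events.put({"id": 1, "title": "  breaking news  ", "body": "Big news today."})

search_index = {}
run_pipeline(events, search_index)
print(search_index[1]["title"])  # "Breaking News"
```

In the real system each step would be a separate service reacting to published events; Go's goroutines and channels make that fan-out natural, which is part of why it fit the architecture.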
Quote for the day:
"Leadership is the art of giving people a platform for spreading ideas that work" -- Seth Godin