How to define load models for continuous testing
A realistic workload model is the core of a solid performance test. Generating load that does not reflect reality will only give you unrealistic feedback about the behavior of your system. That's why analyzing the traffic and the application is the most important task in building your performance testing methodology. To help one of my clients build a realistic performance testing strategy, I built a program that extracts how its microservices are used in production. The objective was to surface the 20% of calls that represent 80% of the production load. Through this extraction, the program guides the project in building a continuous testing methodology for the client's main microservices. One of the biggest limitations is the lack of information in the HTTP logs and in the data stored in APM products. Unfortunately, there is just too much missing information to automatically generate the load testing scripts. Even so, with a tool like my prototype, you'll have everything you need to build test scripts, test definitions, and test objectives.
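As a rough illustration of that 80/20 extraction idea (not the author's actual prototype), here is a minimal Node.js sketch that counts requests per endpoint from an HTTP access log and keeps the smallest set of endpoints covering roughly 80% of the traffic; the log file name and its method-plus-path layout are assumptions:

```javascript
// Minimal sketch: rank endpoints from an HTTP access log and keep the
// smallest set that covers ~80% of the traffic. The file name and the
// "METHOD /path" line layout are assumptions, not the author's prototype.
const fs = require('fs');

const lines = fs.readFileSync('access.log', 'utf8').split('\n').filter(Boolean);

// Count requests per "METHOD /path" pair.
const counts = new Map();
for (const line of lines) {
  const [method, path] = line.split(' ');
  const key = `${method} ${path}`;
  counts.set(key, (counts.get(key) || 0) + 1);
}

// Sort endpoints by traffic and take them until ~80% of calls are covered.
const ranked = [...counts.entries()].sort((a, b) => b[1] - a[1]);
const total = lines.length;
let covered = 0;
const topEndpoints = [];
for (const [endpoint, count] of ranked) {
  if (covered / total >= 0.8) break;
  topEndpoints.push({ endpoint, count, share: (count / total).toFixed(3) });
  covered += count;
}

console.log(`Endpoints covering ~80% of ${total} requests:`);
console.table(topEndpoints);
```

The ranked list is the starting point for deciding which calls deserve scripted load tests first.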
Data Modeling with Indexes in RavenDB
When it comes to data modeling, indexes in relational databases usually don’t enter the equation. However, in RavenDB, indexes serve not only to enhance query performance but also to perform complex aggregations using map/reduce transformations and computations over sets of data. In other words, indexes can transform data and output documents. This means they can and should be taken into account when doing data modeling for RavenDB. Index definitions are stored within your codebase and are then deployed to the database server. This allows your index definitions to be source-controlled and live side by side with your application code. It also means indexes are tied to the version of the app that is leveraging them, making upgrades and maintenance easier. Indexes can be defined in C#/LINQ or JavaScript. For this article, we’ll use JavaScript to show off this feature of RavenDB. It’s worth noting that JavaScript index definitions currently support only up to ECMAScript 5, but this will improve as the JavaScript runtime RavenDB uses adds support for ES2015 syntax in the near future.
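To give a taste of what that looks like, here is a sketch of a JavaScript map/reduce index, loosely following the syntax RavenDB documents for JavaScript indexes; the Orders collection and its Company, Lines, Quantity and PricePerUnit fields are purely illustrative, not a real model:

```javascript
// Map: project each Order document into the shape we want to aggregate.
map('Orders', order => {
    return {
        Company: order.Company,
        Count: 1,
        Total: order.Lines.reduce((sum, l) => sum + l.Quantity * l.PricePerUnit, 0)
    };
});

// Reduce: group the mapped entries by Company and sum them, producing one
// aggregated result (or output document) per company.
groupBy(x => x.Company)
    .aggregate(g => {
        return {
            Company: g.key,
            Count: g.values.reduce((count, val) => val.Count + count, 0),
            Total: g.values.reduce((amount, val) => val.Total + amount, 0)
        };
    });
```

Because the aggregation lives in the index, the query side only has to read the precomputed results, which is why indexes like this belong in the data-modeling conversation.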
How to Build a Culture Bridge to the Cloud
The DevOps culture necessary to effectively use open source, cloud native technologies has fundamentally changed software and team processes. It is expanding how we work and think. For some, this presents an exciting opportunity. Others approach it with more trepidation. Startups, in general, are on board. They don’t have entrenched technology that needs to be maintained and upgraded. They are also able to hire people whose skill sets are a good fit with newer technologies. For enterprises, it’s a bit tougher. They have massive investments in workhorse technologies and platforms such as Java and WebLogic. But they also have IT teams with deep heritage and operational knowledge in building, deploying, running and maintaining applications over decades. Understandably, their developers don’t necessarily want to become experts in infrastructure and in projects such as Kubernetes. They may not see the value in having novices muck around with it. As long as the developer and operations teams remain separate, they each have a measure of power and a measure of comfort.
Machines and devices are everywhere, connected—and multiplying. These are the “things” of the Internet of Things, and today there are nearly three devices attached to the internet for every human on the planet. By 2025 that ratio will soar to 10 to 1. For consumers, that means their thermostats and refrigerators can be connected to real-time, sophisticated analytics engines that automatically adjust them to be more efficient and save more money. But what does that mean for businesses? Well, just as it’s doing for consumers, IoT is helping businesses streamline operations, save money and time with real-time, actionable intelligence, and prevent problems with predictive analytics. But there’s a dark side to IoT. Frankly, it’s the concerning underbelly that exists in all connected technologies: a lack of security. We already see massive DDoS attacks driven by IoT devices. Experts concede that is just the tip of the iceberg. In all, analysts project the global IoT market to exceed the $1 trillion mark in 2022. Today, companies in every industry rely on IoT as part of their business strategy.
GDPR at a critical stage, says information commissioner
“We find ourselves at a critical stage. For me, the crucial, crucial change the law brought was around accountability. Accountability encapsulates everything the GDPR is about.” Denham said the GDPR enshrines in law an onus on companies to understand the risks that they create for others with their data processing, and to mitigate those risks. It also formalises the move away from box ticking to seeing data protection as something that is part of the cultural and business fabric of an organisation, and it reflects that people increasingly demand to be shown how their data is being used, and how it is being looked after, she added. However, she said this change is not yet evident in practice. “I don’t see it in the breaches reported to the ICO. I don’t see it in the cases we investigate, or in the audits we carry out,” she said. Denham said this is both a problem and an opportunity. “It’s a problem because accountability is a legal requirement, it’s not optional. But it is an opportunity because accountability allows data protection professionals to have a real impact on that cultural fabric of your organisation,” she said.
Gaming company boosts call center employee engagement
Many companies use design thinking to improve the customer experience. After finding it useful in the CX realm, businesses are now trying to apply similar approaches to improve employee engagement. Electronic Arts (EA) Inc. found this approach helpful for improving the engagement of call center employees, who typically bear the brunt of customer complaints. "No one ever calls us when something good is happening," said Abby Eaton, manager of employee experience at EA. "They are calling because something has gone wrong and they are already frustrated, so the complexity of the advisers' jobs is challenging." Design thinking, which can help improve the design of spaces, physical products and applications, has been a trend since the 1990s. Now, companies are applying this same approach to improve applications in the workplace -- cutting costs and improving worker productivity, said Parminder Jassal, Work and Learn Futures group director at the Institute for the Future, a think tank in Palo Alto, Calif.
Innovation Nation: Blockchain much bigger than Bitcoin
Transparency works well for Bitcoin's blockchain, but it might not suit, say, a large company's supply-chain system, where the company doesn't want suppliers and contractors to see each other's transactions. Immutability is a double-edged sword: if a fraudulent or erroneous transaction is recorded on the blockchain, there's no easy way to amend or delete it. The only way to fix it is to go back in time on the blockchain and start again from that point to invalidate the transaction, provided everyone in the network agrees to do so. This effectively creates a new version of the software, and thus a new cryptocurrency that is not compatible with the older one. Not being able to delete or amend information could also make blockchain data stores incompatible with tightening global privacy rules that give individuals the right to "be forgotten" and have their details deleted if they so wish. Muir says we don't know the answer to that yet. Likewise, accessing blockchain data requires the use of a digital cryptographic key that has to be kept secure.
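To make the immutability point concrete, here is a toy hash-chain sketch (not any particular blockchain's implementation): each block's hash covers its transaction plus the previous block's hash, so amending one recorded transaction invalidates every block after it.

```javascript
// Toy hash chain: each block's hash covers its data plus the previous hash,
// so "fixing" an earlier transaction breaks everything recorded after it.
const crypto = require('crypto');

const hash = (s) => crypto.createHash('sha256').update(s).digest('hex');

function buildChain(transactions) {
  let prev = '0'.repeat(64);                       // genesis placeholder
  return transactions.map((tx) => {
    const block = { tx, prev, hash: hash(tx + prev) };
    prev = block.hash;
    return block;
  });
}

function isValid(chain) {
  let prev = '0'.repeat(64);
  return chain.every((b) => {
    const ok = b.prev === prev && b.hash === hash(b.tx + prev);
    prev = b.hash;
    return ok;
  });
}

const chain = buildChain(['pay A 5', 'pay B 3', 'pay C 7']);
console.log(isValid(chain));                       // true

chain[1].tx = 'pay B 300';                         // amend an "erroneous" transaction
console.log(isValid(chain));                       // false: later blocks no longer match
```

Rewriting the rest of the chain from that point is exactly the fork-and-agree scenario described above.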
5 mistakes that doom a DevOps transformation from the start
The delivery pipeline in DevOps consists of feedback loops that allow you to inspect, reflect, and decide if you are still doing the right things in the right way. As you get better and smarter and learn more, you'll see ways to improve, to optimize, to cut out steps that are not providing value. Often those improvements require some investment and extra effort to implement. If you don't take the time to fix the pipeline when you see the ways to improve, you are just investing in a wasteful process. You are doing the process for the sake of the process, not to add the maximum value to what you are delivering. The sooner you improve, the sooner you reap the benefits of that improvement. It isn't just a matter of reviewing the process twice a year or every quarter. Continuous improvement is a cultural shift that says everyone should get better all the time. Every time you go through the process, you get a little better and learn a little more.
A Glimpse into WebAssembly
One of the biggest features WebAssembly has been touting is performance. While overall performance is trending faster than JavaScript, function-to-function comparisons show that JavaScript is still comparable in some benchmarks, so your mileage may vary. When comparing function execution time, WebAssembly is predicted to be about 20-30% faster than JavaScript, which is not as much as it sounds, since JavaScript is heavily optimized. At this time, the function performance of WebAssembly is roughly the same as or even a little worse than JavaScript — which has deflated my hopes in this arena. Since WebAssembly is a relatively new technology, there are probably a few security exploits waiting to be found. For example, there are already some articles about exploiting type checking and control flow within WebAssembly. Also, although WebAssembly runs in a sandbox, it was susceptible to the Spectre and Meltdown CPU exploits, though these were mitigated by browser patches. Going forward, there will be new exploits. And if you are supporting enterprise clients using IE or other older browsers, you should lean away from WebAssembly.
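For context on how such function-to-function comparisons are usually made, here is a minimal browser-side sketch; add.wasm and its exported add function are hypothetical stand-ins for a compiled module, and this shows the shape of a measurement rather than a rigorous benchmark:

```javascript
// Minimal sketch of a function-to-function comparison in the browser.
// 'add.wasm' and its exported `add` function are hypothetical placeholders.
const jsAdd = (a, b) => a + b;

async function compare() {
  const { instance } = await WebAssembly.instantiateStreaming(fetch('add.wasm'));
  const wasmAdd = instance.exports.add;

  // Time a million calls of each function.
  const time = (fn) => {
    const start = performance.now();
    let acc = 0;
    for (let i = 0; i < 1_000_000; i++) acc = fn(acc, i);
    return performance.now() - start;
  };

  console.log('JS  :', time(jsAdd).toFixed(2), 'ms');
  console.log('Wasm:', time(wasmAdd).toFixed(2), 'ms');
}

compare();
```

Crossing the JS/Wasm boundary has its own cost, which is part of why tiny functions like this often show little or no advantage.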
Is Hadoop’s legacy in the cloud?
What many people failed to realise is that Hadoop itself is more of a framework than a big data solution. Plus, with its broad ecosystem of complementary open source projects, Hadoop was too complicated for most businesses. Fully leveraging it required a level of configuration and programming knowledge that could only be supplied by a dedicated team. Even when there was a dedicated internal team, something extra was sometimes needed. For instance, one of Exasol’s clients, King Digital Entertainment, makers of the Candy Crush series of games, couldn’t get the most out of Hadoop. It wasn’t quick enough for the interactive BI queries that the internal data science team demanded. They needed an accelerator on their multi-petabyte Hadoop cluster that allowed their data scientists to interactively query the data. The world of data warehousing has changed in recent years, and Hadoop has had to adapt. The IT infrastructure of 2009-2013, when Hadoop was at the peak of its fame, differs greatly from the IT infrastructure of today.
Quote for the day:
"Leaders need to strike a balance between action and patience." -- Doug Smith