Just like code testing, Data Quality is one of those things that we generally don't pay attention to until it comes back to bite us -- and when it does, it's usually a customer who notices, and as always, we poor beleaguered developers get to pay the price. I'm starting a Data Quality project, so I thought it might be good to talk about what it is, and how we can put some simple checks and balances in place to help us manage our data and improve its quality. ... To bring your system to the next level, and make it really robust, you could consider building these kinds of checks into your system whenever data is changed or ingested. While you can get very detailed and domain-specific with the following, in general it's possible to be quite generic about data at this level and combine these rules and checks to dramatically improve the quality of your data. The bottom line is that we are seeking to ensure our data is in a clean state before allowing it to proceed into production or analysis.
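As a minimal sketch of what "generic, combinable" checks might look like, here is one possible shape in Python. The field names ("email", "age") and the rule set are illustrative assumptions, not from any real schema; the point is that small reusable predicates can be composed into a gate that data must pass before reaching production.

```python
# Illustrative sketch of generic, combinable data-quality rules.
# Field names ("email", "age") and thresholds are hypothetical.

def not_null(field):
    return lambda row: row.get(field) is not None

def in_range(field, lo, hi):
    return lambda row: row.get(field) is not None and lo <= row[field] <= hi

def matches(field, predicate):
    return lambda row: predicate(row.get(field))

def validate(rows, rules):
    """Split rows into (clean, rejected) so bad records never reach production."""
    clean, rejected = [], []
    for row in rows:
        (clean if all(rule(row) for rule in rules) else rejected).append(row)
    return clean, rejected

rules = [
    not_null("email"),
    in_range("age", 0, 120),
    matches("email", lambda v: isinstance(v, str) and "@" in v),
]

rows = [
    {"email": "a@example.com", "age": 34},
    {"email": None, "age": 40},            # fails not_null
    {"email": "b@example.com", "age": 150} # fails in_range
]
clean, rejected = validate(rows, rules)
```

Because each rule is just a predicate over a record, the same machinery can run at ingestion time, on change, or as a batch audit.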
Another lane on our highway relates to architecture and non-functional requirements. One day, you may decide to invest your time in common practices for solving scalability issues of any kind, look at how high availability is achieved in some modern and popular products, what helps one solution survive high load, and so on. If you are a fan of patterns, you could look at the classic patterns first and then switch to modern ones, recall old-school enterprise patterns, or read a book about integration patterns. If you like the web, the hype is around monolith vs. SOA vs. microservices, so you can invest time in that area. If you are in the big data world, then lambda and kappa architectures might be interesting to you, too. Another valuable effort might be to spend time reviewing the architectures of successful products.
The forensics company claims it can download almost every shred of data from almost any device in a matter of seconds -- on behalf of police intelligence agencies in over a hundred countries -- to help solve crimes. It does that by taking a seized phone from the police, then plugging it in, and extracting messages, phone calls, voicemails, images, and more from the device using its own proprietary technology. It then generates an extraction report, allowing investigators to see at a glance where a person was, who they were talking to, and when. We obtained a number of these so-called extraction reports. One of the more interesting reports by far was from an iPhone 5 running iOS 8. The phone's owner didn't use a passcode, meaning the phone was entirely unencrypted. Here's everything that was stored on that iPhone 5, including some deleted content.
In rare cases, big problems are quickly solved. More often, large-scale problems require time to fix. And time is something many executives believe is in short supply. As Bob Richards, a vice president for a global manufacturer headquartered in Switzerland, notes, “True change -- from a problem-solving standpoint -- takes a lot longer than is usually allowed in companies. You need to get folks involved in identifying the problem, how the problem was created, and then get their input on how to solve the problem.” Richards has devised a simple three-step process for staving off the executive impatience that leads to killing off promising projects. He acknowledges an executive’s difficult position, saying, “When you’re in a leadership role, it is one problem after the next, and your role is to get problems resolved -- and quickly.”
The ability to track packets through the network is necessary, but it's not enough. With virtualization, network and application management have become tightly interdependent. When an application starts up, virtualized network management requires the creation of virtual components and the allocation of network paths among the application's virtual machines (VMs). These VMs may execute on different servers, and may move from server to server in response to shifting loads. When a VM moves, network traffic must be redirected to support the new configuration. Meanwhile, performance monitors must report whether applications are meeting service-level agreements and track server and network utilization rates. They collect statistics that show usage over time so managers can spot components that are nearing their limits.
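The "collect statistics over time, flag components nearing limits" idea can be sketched very simply. The component names and the 80% threshold below are illustrative assumptions, not from any real monitoring product:

```python
# Sketch of utilization tracking that flags components nearing their limits.
# Component names and the 0.80 threshold are hypothetical.
from collections import defaultdict

class UtilizationMonitor:
    def __init__(self, threshold=0.80):
        self.threshold = threshold
        self.samples = defaultdict(list)   # component -> utilization history

    def record(self, component, utilization):
        self.samples[component].append(utilization)

    def near_limit(self, window=3):
        """Components whose recent average utilization exceeds the threshold."""
        flagged = []
        for comp, history in self.samples.items():
            recent = history[-window:]
            if recent and sum(recent) / len(recent) > self.threshold:
                flagged.append(comp)
        return flagged

mon = UtilizationMonitor()
for u in (0.70, 0.85, 0.90):       # a VM host trending toward saturation
    mon.record("vm-host-1", u)
for u in (0.30, 0.35, 0.40):       # a switch with plenty of headroom
    mon.record("core-switch-1", u)
```

A real monitor would sample continuously and correlate with SLA metrics, but the core loop -- record, aggregate over a window, compare to a limit -- is the same.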
Reactive programming is the new kid on the block, offering built-in solutions for some of the most difficult concepts in programming, including concurrency management and flow control. But if you work on an application development team, there's a good chance you are not using reactive, and so you might have questions: how do I get there, how do I test it, can I introduce it in phases? ... In the reactive world we aim to bring a blocking application to a non-blocking state. (A blocking application is one that blocks when performing I/O operations such as opening TCP connections.) Most of the legacy Java APIs -- for opening sockets, talking to databases (JDBC), File/InputStream/OutputStream -- are blocking APIs. The same is true of the early implementations of the Servlet API and many other Java constructs.
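The blocking vs. non-blocking contrast the excerpt describes for Java can be sketched in a few lines (shown in Python with `asyncio` rather than Java, purely for illustration): two simulated I/O waits run back to back when each call blocks the thread, but overlap when they yield to an event loop.

```python
# Contrast between blocking and non-blocking I/O waits.
# asyncio stands in here for a reactive/non-blocking runtime.
import asyncio
import time

def blocking_task(delay):
    time.sleep(delay)           # blocks the whole thread, like a legacy JDBC/socket call

async def non_blocking_task(delay):
    await asyncio.sleep(delay)  # yields to the event loop instead of blocking

# Two 0.1 s "I/O waits" back to back: roughly 0.2 s when blocking...
start = time.monotonic()
blocking_task(0.1)
blocking_task(0.1)
blocking_elapsed = time.monotonic() - start

# ...but roughly 0.1 s when the waits overlap on one event loop.
async def main():
    await asyncio.gather(non_blocking_task(0.1), non_blocking_task(0.1))

start = time.monotonic()
asyncio.run(main())
non_blocking_elapsed = time.monotonic() - start
```

The non-blocking version finishes in about half the time because the event loop interleaves the waits on a single thread -- the same property that lets reactive servers handle many connections without a thread per request.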
There are many different platforms, programming languages, and tools that you can learn. DFRobot* created a tank robot platform called Devastator that contains the Romeo* controller board. This board was modified for use with the Intel® Edison compute module to bring more capability to the kit, with an increased number of I/Os, integrated WiFi, USB host, servo control, and increased processing power. Out of the box, the kit can be programmed using the Arduino* IDE and a USB connection. This article describes another method of programming the robot over WiFi using the Intel® XDK, Node.js*, and the MRAA library. In particular, the article discusses the tools used, the Romeo controller board, mapping peripheral pins, creating an Intel XDK project, and the implementation of the sensor and actuator components for the robot.
It’s just the latest shakeup in the IT equipment leasing industry, which has also been reeling from reductions in the cost of IT equipment and increased adoption of cloud computing. “The profits of the companies that lease IT equipment are under pressure,” Kirz says. “At the same time, cloud adoption is shifting lessor relationships from the end-client to the cloud provider, and many cloud providers are building their own data centers with commodity equipment, thus shrinking the lessors’ market size.” ... In the face of these trends, a number of large independent leasing companies have recently sold themselves to large banks, resulting in market consolidation. Crestmark Bank bought equipment-leasing company TIP Capital in late 2014. Huntington Bank acquired Macquarie Equipment Finance last April. And Wells Fargo purchased GE Capital Vendor Finance in March.
With Big Data processing power and IoT insights, repairs and maintenance can be optimized to avoid delays, stoppages, and safety risks. These technologies are used to pinpoint precisely what leads up to an issue. Often, issues can be resolved instantly and remotely, before they escalate. In this instance, Big Data and IoT sensor input simplify the process of obtaining the appropriate data, which gives companies the chance to react effectively and avoid crisis situations. Manufacturing companies are reaping huge benefits by deploying Big Data technologies. Automakers worldwide use data analytics to monitor the cost of steel and other raw materials, helping them identify when they can purchase at the best price point. How can this be done? A database of several suppliers is built on a Hadoop framework; it tracks which supplier offers the most competitive price and can deliver at the optimal time. The result? Car manufacturing costs are reduced significantly.
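The selection logic behind that supplier database can be sketched in a few lines (plain Python rather than Hadoop, and with entirely made-up supplier names, prices, and lead times): among suppliers that can deliver by the deadline, pick the lowest quoted price.

```python
# Illustrative sketch of supplier selection on price and delivery time.
# All supplier data below is fabricated for the example.

suppliers = [
    {"name": "SupplierA", "price_per_ton": 710, "lead_time_days": 30},
    {"name": "SupplierB", "price_per_ton": 685, "lead_time_days": 45},
    {"name": "SupplierC", "price_per_ton": 695, "lead_time_days": 25},
]

def best_supplier(quotes, max_lead_time_days):
    """Cheapest supplier among those that can deliver within the deadline."""
    eligible = [s for s in quotes if s["lead_time_days"] <= max_lead_time_days]
    return min(eligible, key=lambda s: s["price_per_ton"]) if eligible else None

choice = best_supplier(suppliers, max_lead_time_days=35)
```

SupplierB is cheapest overall but misses the 35-day deadline, so the query falls through to SupplierC -- the same price-vs.-timing trade-off the paragraph describes, just at toy scale.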
"AI/machine learning can help close the diversity gap, as long as it is not susceptible to human bias. For example, recruiting contact center employees could provide AI/machine learning models with the historical application forms of hired contact center employees with high customer satisfaction scores. This allows the model to pick up on the subtle application attributes/traits and not be impacted by on-the-job, human biases," Alexander says. By simply using an automated, objective process like this, it's possible to drastically reduce the scope for human bias. If, for example, fairly trained AI/machine learning tools are used to whittle an applicant pool down from 100 applicants to the final 10 interviewees, that means that 90 percent of the pool reduction would be done in a process immune to any human biases, Alexander explains.
Quote for the day:
"Motivation is what gets you started. Habit is what keeps you going." -- Jim Ryun