Each private cubicle sits on short legs, enabling small warehouse robots to scuttle around underneath. The robots can then pick up the cubes and move them around the office based on what each person and team needs for the day. For instance, if you have a day of heads-down work, you’d be assigned a private cubicle so you can focus. If you have a day full of meetings and don’t need private space, your cube combines with other cubes to create a larger space in which to work with your colleagues. The robots shift the office in real time to make this happen. ... For now, the idea seems far-fetched, but Rapt’s design principal and CEO David Galullo believes it’s closer than you might think. He says the studio is working with clients who are interested in how a workplace can be reconfigured over a weekend to respond to a team’s changing needs. The key is to keep the office as spare as possible so things can be moved around easily, which he believes is one reason many companies prefer an open plan.
An HSBC spokeswoman tells Information Security Media Group that less than 1 percent of HSBC's U.S. customers were affected by the data breach. The bank declined to quantify how many U.S. customers it has. But The Telegraph reports that HSBC manages about 1.4 million U.S. accounts, meaning 14,000 customers may have been affected. "HSBC regrets this incident, and we take our responsibility for protecting our customers very seriously," the bank says in a statement sent to ISMG. "We responded to this incident by fortifying our log-on and authentication processes, and implemented additional layers of security for digital and mobile access to all personal and business banking accounts," the statement notes. "We have notified those customers whose accounts may have experienced unauthorized access and are offering them one year of credit monitoring and identity theft protection service." HSBC's data breach notification to victims also notes: "You may have received a call or email from us so we could help you change your online banking credentials and access your account."
An SSD in a laptop will often go for long periods without any IO, leaving it plenty of time to perform garbage collection and similar functions. An enterprise SSD, however, may face a full-time 24×7 workload and never have idle time for garbage collection. And in the enterprise, consistent performance matters more than peak performance. Enterprises need SSD suppliers to create drives that focus on the consistent delivery of IO (or IOPS) no matter how heavy the workload, rather than on peak numbers that look good on a marketing datasheet. The key challenge to delivering consistent performance is how the SSD handles write IO, especially under heavy random workloads. With each write, the flash media needs to find available space to place that write. If there is no space available, it has to make space "on the fly" by rearranging data within cells to create contiguous space for the new write. Garbage collection routines are supposed to make this space available in advance, but they are not always afforded the time to complete their tasks.
MongoDB 4.0 adds support for multi-document ACID transactions. But wait... does that mean MongoDB did not support transactions until now? No, MongoDB has always supported transactions in the form of single-document transactions. MongoDB 4.0 extends these transactional guarantees across multiple documents, multiple statements, multiple collections, and multiple databases. What good would a database be without any form of transactional data integrity guarantee? ... Multi-document ACID transactions in MongoDB are very similar to what you probably already know from traditional relational databases. MongoDB’s transactions are a set of related operations that must atomically commit or fully roll back, with all-or-nothing execution. Transactions are used to make sure operations are atomic even across multiple collections or databases. Thus, with snapshot isolation, another user sees either all of a transaction’s operations or none of them.
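As a rough illustration, a multi-document transaction with the MongoDB Node.js driver follows the start/commit/abort pattern below. This is a minimal sketch: the `client` is assumed to be an already-connected driver instance against a replica set (which MongoDB 4.0 requires for transactions), and the database, collection, and field names are hypothetical.

```javascript
// Sketch: move funds between two account documents atomically.
// Assumes `client` is a connected MongoDB Node.js driver client
// pointed at a MongoDB 4.0+ replica set.
async function transferFunds(client, fromId, toId, amount) {
  const session = client.startSession();
  try {
    session.startTransaction();
    const accounts = client.db('bank').collection('accounts');
    // Both updates commit together or not at all.
    await accounts.updateOne(
      { _id: fromId }, { $inc: { balance: -amount } }, { session });
    await accounts.updateOne(
      { _id: toId }, { $inc: { balance: amount } }, { session });
    await session.commitTransaction();
  } catch (err) {
    // Any failure rolls back every operation in the transaction.
    await session.abortTransaction();
    throw err;
  } finally {
    session.endSession();
  }
}
```

If the second update fails, the `$inc` applied by the first is never visible to other sessions, which is exactly the all-or-nothing behavior described above.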
Data security and data management are much more complicated. Every member of the Blockchain must preserve and protect a private key. If that key is ever compromised by an unauthorized party, there is little that can be done to revoke it. Perhaps just as bad, if the key is lost (e.g., accidentally deleted), that user's access to the system is permanently lost as well. It is estimated, for example, that 20% of all the Bitcoins in the world have been lost in this manner. Finally, by itself, Blockchain doesn't really offer much for data management. Rather, it enables new forms of data management. Supply chain is a great example where Blockchain appears to be having real success. When you look at world-wide, complicated supply chains, keeping track of data between hundreds, or even thousands, of inter-operating vendors is extremely challenging. Creating a Blockchain that lets these participants create data of their own, and track related data from others, is a fantastic fit.
The sustained misdirection further underscores the fragility of BGP, which forms the underpinning of the Internet's global routing system. In April, unknown attackers used BGP hijacking to redirect traffic destined for Amazon’s Route 53 domain-resolution service. The two-hour event allowed the attackers to steal about $150,000 in digital coins as unwitting people were routed to a fake MyEtherWallet.com site rather than the authentic wallet service they intended to reach. When end users clicked through a message warning of a self-signed certificate, the fake site drained their digital wallets. ... “While one may argue such attacks can always be explained by ‘normal’ BGP behavior, these, in particular, suggest malicious intent, precisely because of their unusual transit characteristics—namely the lengthened routes and the abnormal durations,” the authors wrote. The Canada to South Korea leak, the report said, lasted for about six months and started in February 2016.
As mentioned, the processor is relatively capable for the price, with a dual-core 2.0GHz Arm Cortex-A72 paired with a quad-core 1.5GHz Arm Cortex-A53 in a big.LITTLE configuration, which swaps tasks between cores for power efficiency. Smooth 4K video playback should be possible courtesy of the HDMI 2.0 port and Mali-T864 GPU. Fast SSD storage is also an option, via an M.2 interface supporting up to a 2TB NVMe SSD, and if the onboard SD card storage is too slow, there's an option to add up to 128GB of eMMC storage to the board. Though the memory is relatively fast — 64-bit, dual-channel 3,200Mb/s LPDDR4 — only 1GB is available on the base $39 model, ranging up to 4GB for $65. There's a decent selection of ports, with four USB Type-A ports: one USB 3.0 host, one USB 3.0 OTG, and two USB 2.0 host ports. For those interested in building their own homemade electronics, there's also a 40-pin expansion header for connecting to boards, sensors and other hardware. Though this header's pin layout is similar to that of the Pi, the Rock Pi's maker said it wasn't possible to make it "100% GPIO compatible".
The dual forces of technological (and data) innovation and shifts in the regulatory and broader sociopolitical environment are opening great swaths of this financial-intermediation system to new entrants, including other large financial institutions, specialist-finance providers, and technology firms. This opening has not had a one-sided impact, nor does it spell disaster for banks. Where will these changes lead? Our view is that the current complex and interlocking system of financial intermediation will be streamlined by the forces of technology and regulation into a simpler system with three layers. ... Our view of a streamlined system of financial intermediation, it should be noted, is an “insider’s” perspective: we do not believe that customers or clients will really take note of this underlying structural change. The burning question, of course, is what these changes mean for banks.
New datasets result in training and evolving new ML models that need to be made available to users. Some of the best practices of continuous integration and deployment (CI/CD) are applied to ML lifecycle management. Each version of an ML model is packaged as a container image with a distinct tag. DevOps teams bridge the gap between the ML training environment and the model deployment environment through sophisticated CI/CD pipelines. When a fully trained ML model is available, DevOps teams are expected to host the model in a scalable environment. ... The rise of containers and container management tools makes ML development manageable and efficient. DevOps teams are leveraging containers for provisioning development environments, data processing pipelines, training infrastructure, and model deployment environments. Emerging technologies such as Kubeflow and MLflow focus on enabling DevOps teams to tackle the new challenges involved in dealing with ML infrastructure.
In the old days, iframes were used a lot. Not only for embedding content from other sites, cross-domain Ajax, or hacking an overlay that covered selects, but also to provide boundaries between page zones or mimic a desktop-like windows layout… The window.postMessage method was introduced into browsers to enable safe cross-origin communication between Window objects. The method can be used to pass data between iframes. In this post, I’m assuming that the application with iframes is old but can run in Internet Explorer 11, which is the last version that Microsoft released (in 2013). From what I’ve seen, it’s often the case that Internet Explorer has to be supported, but at least it’s the latest version of it. ... Thanks to the postMessage method, it’s very easy to create a mini message bus so events triggered in one iframe can be handled in another, if the target iframe chooses to take an action. Such an approach reduces coupling between iframes, as one frame doesn't need to know any details about the elements of the other.
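A minimal sketch of such a postMessage-based message bus might look like the following. The topic names and origins are hypothetical, and the code deliberately avoids ES6 syntax so it could also run in Internet Explorer 11.

```javascript
// Publish an event to another frame's window. IE11 only reliably passes
// strings through postMessage, so the envelope is JSON-encoded.
function publish(targetWindow, targetOrigin, topic, payload) {
  targetWindow.postMessage(
    JSON.stringify({ topic: topic, payload: payload }),
    targetOrigin);
}

// Listen on a window for events with a given topic, ignoring messages
// from unexpected origins and anything that isn't a valid envelope.
function subscribe(win, expectedOrigin, topic, handler) {
  win.addEventListener('message', function (event) {
    // Always validate the sender's origin before acting on a message.
    if (event.origin !== expectedOrigin) { return; }
    var message;
    try { message = JSON.parse(event.data); } catch (e) { return; }
    if (message && message.topic === topic) { handler(message.payload); }
  });
}
```

A frame that wants to react to, say, a hypothetical `cart:updated` event only needs to call `subscribe(window, 'https://app.example', 'cart:updated', handler)`; it never has to reach into the publishing frame's DOM, which is the decoupling described above.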
Quote for the day:
"If you only read the books that everyone else is reading, you can only think what everyone else is thinking" -- Haruki Murakami