Many network infrastructure vendors are developing automation technology aimed primarily, if not solely, at their own products rather than multi-vendor environments. While most enterprises use two or three different automation tools in their initiatives, 42 percent say that an automation tool aimed at a single vendor is part of their strategy. In fact, 26 percent said a single-vendor automation tool is the most important part of their automation technology strategy. ... The most important zero-touch provisioning (ZTP) feature, according to EMA's survey, is software-image auto-updates and verification. Many enterprises are also interested in being able to custom-provision and configure devices via scripts, and in the ability to unify ZTP network provisioning with compute and storage infrastructure in data centers. Not every network vendor offers embedded ZTP features on its platforms, and most only offer them on their latest-generation products. Enterprises with older equipment may switch to a new vendor during a refresh, and ZTP features may be a contributing or leading driver of that vendor switch.
Group-IB, which has analyzed the cards listed for sale, says more than 98 percent appear to have been issued by Indian banks, with a single bank accounting for more than 18 percent of all of the dumps. About 1 percent of the cards appear to have been issued by Colombian banks. What's unusual about this sale is that so many payment cards have been uploaded at once. "Databases are usually uploaded in several smaller parts at different times," says Ilya Sachkov, CEO and founder of Group-IB, which was originally headquartered in Moscow. Also unusual is the sheer scale of what's being offered in one go. "This is indeed the biggest card database encapsulated in a single file ever uploaded on underground markets at once," he says. "What is also interesting about this particular case is that the database that went on sale hadn't been promoted prior, either in the news, in card shops or even on forums on the dark net. The cards from this region are very rare on underground markets. In the past 12 months, it is the only big sale of card dumps related to Indian banks."
Containers are designed chiefly to isolate processes or applications from each other and the underlying system. Creating and deploying individual containers is easy. But what if you want to assemble multiple containers—say, a database, a web front-end, a computational back-end—into a large application that can be managed as a unit, without having to worry about deploying, connecting, managing, and scaling each of those containers separately? You need a way to orchestrate all of the parts into a functional whole. That’s the job Kubernetes takes on. If containers are passengers on a cruise, Kubernetes is the cruise director. Kubernetes, based on projects created at Google, provides a way to automate the deployment and management of multi-container applications across multiple hosts, without having to manage each container directly. The developer describes the layout of the application across multiple containers, including details like how each container uses networking and storage. Kubernetes handles the rest at runtime. It also handles the management of fiddly details like secrets and app configurations.
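The "developer describes the layout" step is typically done declaratively. The following is a minimal sketch of such a description, assuming a hypothetical front-end image named `example/web-frontend`; the names and values are illustrative, not from any particular project:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend            # hypothetical name for illustration
spec:
  replicas: 3                   # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: example/web-frontend:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

Given a set of manifests like this for the database, front-end, and back-end, Kubernetes schedules the containers across hosts and restarts or rescales them at runtime; the developer never places or manages individual containers directly.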
The effect of having computer systems wirelessly or directly transmit data to the brain isn't known, but related technologies such as deep brain stimulation -- where electrical impulses are sent into brain tissue to regulate unwanted movement in medical conditions such as dystonias and Parkinson's disease -- may cause personality changes in users. And even if BCIs did cause personality changes, would that really be a good enough reason to withhold them from someone who needs one -- a person with paraplegia who requires an assistive device, for example? As one research paper in the journal BMC Medical Ethics puts it: "the debate is not so much over whether BCI will cause identity changes, but over whether those changes in personal identity are a problem that should impact technological development or access to BCI". Whether regular long-term use of BCIs will ultimately affect users' moods or personalities isn't known, but it's hard to imagine that technology that plugs the brain into an AI or an internet-scale repository of data won't ultimately have an effect on personhood.
For communication, previous attempts used infrared light or radio waves, but if you have many robots in a small area, these signals can conflict. The MIT team instead created a cube devoid of arms, using inertial forces to move the robots. These forces are generated by a mass inside each cube that throws itself against the side of the module, causing the block to rotate or move in 24 different directions across its six faces, the paper added. "There's a relatively large field of other people building sort of similar robots," Romanishin said. "But the two main unique parts about our robots are how they move, which is using angular momentum from what we call a reaction wheel, and the way they use magnets. They use them in a special way that is potentially a really scalable and cheap solution for identifying hundreds of thousands of elements in a small space." "One of the big things that we looked at was how do you make the robots move relative to each other? It's really challenging, from a design standpoint and a physics standpoint," Romanishin added.
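The reaction-wheel mechanism can be sketched with conservation of angular momentum: a flywheel inside the cube is spun up and then braked abruptly, transferring its momentum to the cube body, which pivots over an edge. The numbers below are purely illustrative, not specifications from the MIT paper:

```python
# Illustrative reaction-wheel sketch (assumed numbers, not MIT's specs).
# When the spinning wheel is braked, its angular momentum
# L = I_wheel * w_wheel transfers to the cube body.
I_wheel = 2.0e-5   # flywheel moment of inertia, kg*m^2 (assumed)
w_wheel = 1000.0   # flywheel spin rate before braking, rad/s (assumed)
I_cube = 4.0e-4    # cube moment of inertia about its pivot edge (assumed)

L = I_wheel * w_wheel   # angular momentum stored in the wheel
w_cube = L / I_cube     # cube's angular velocity just after braking
print(f"cube pivots at {w_cube:.0f} rad/s")  # -> cube pivots at 50 rad/s
```

Because the cube's moment of inertia differs about each of its edges, braking the wheel at different speeds and against different faces yields the many distinct pivoting motions the paper describes.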
In simple terms, regression testing can be defined as retesting a computer program after changes are made to it, to ensure that the changes do not adversely affect the existing code. Regression testing increases the chance of detecting bugs caused by changes to the application. It can help catch defects early and thus reduce the cost of resolving them. Regression testing ensures the proper functioning of the software so that the best version of the product is released to market. However, creating and maintaining a near-infinite set of regression tests is not feasible. This is why enterprises are focusing on automating most regression tests to save time and effort. ... Whenever there is a change in the app or a new version is released, the developer carries out these tests as part of the regression testing process. First, the developer executes unit-level regression tests to validate the code they have modified, along with any new tests created to cover new functionality. Then the changed code is merged and integrated to create a new build of the application under test (AUT). After that, smoke tests are performed to ensure that the build created in the previous step is good before any additional testing is performed.
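As a minimal sketch of the unit-level step, assume a hypothetical `slugify` function that a developer has just modified. The existing tests pin down behavior that callers already rely on, and a new test covers the new functionality:

```python
# Hypothetical function that was just modified (illustrative example).
def slugify(title: str) -> str:
    words = [w for w in title.lower().split() if w]
    return "-".join(words)

# Unit-level regression tests: re-run after every change to slugify().
def test_basic_title():
    assert slugify("Hello World") == "hello-world"

def test_extra_whitespace():   # existing behavior that must not regress
    assert slugify("  Hello   World ") == "hello-world"

def test_new_functionality():  # new test covering the new change
    assert slugify("A  B   C") == "a-b-c"

if __name__ == "__main__":
    test_basic_title()
    test_extra_whitespace()
    test_new_functionality()
    print("all regression tests passed")
```

In practice a runner such as pytest would discover and execute these `test_*` functions automatically, and the same suite would then run again against the new build of the AUT as part of the smoke-test stage.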
How the replication works is also very different. Object replication is done at the object level, versus the block-level replication of cloud block storage and typical RAID systems. Objects are also never modified. If an object needs to be modified, it is simply stored as a new object. If versioning is enabled, the previous version of the object is saved for historical purposes. If not, the previous version is simply deleted. This is very different from block storage, where files or blocks are edited in place, and the previous versions are never saved unless you use some kind of additional protection system. Cloud vendors offer object-storage services, which include Amazon's Simple Storage Service (S3), Azure's Blob Store, and Google's Cloud Storage. These object-storage systems can be set up to withstand even a regional disaster that would take out all availability zones. Amazon does this using cross-region replication that must be configured by the customer. Microsoft's geo-redundant storage includes replication across regions, and Google offers dual-region and multi-region storage that does the same thing.
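The write-once semantics described above can be sketched in a few lines. This is an illustrative toy model, not any vendor's API: a "modification" with versioning enabled appends a new version under the same key, while without versioning the old bytes are discarded.

```python
class ObjectStore:
    """Toy model of object-storage semantics: objects are immutable,
    so a 'modify' is really a new object stored under the same key."""

    def __init__(self, versioning: bool = False):
        self.versioning = versioning
        self._data = {}          # key -> list of versions (newest last)

    def put(self, key: str, body: bytes) -> None:
        versions = self._data.setdefault(key, [])
        if not self.versioning:
            versions.clear()     # previous version is simply deleted
        versions.append(body)    # never edited in place

    def get(self, key: str, version: int = -1) -> bytes:
        return self._data[key][version]

# With versioning enabled, the old version survives for history.
store = ObjectStore(versioning=True)
store.put("report.csv", b"v1")
store.put("report.csv", b"v2")
print(store.get("report.csv"))             # b'v2'  (current version)
print(store.get("report.csv", version=0))  # b'v1'  (historical version)
```

Block storage, by contrast, would overwrite the affected blocks of `report.csv` in place, leaving no historical copy to retrieve.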
One obvious potential culprit for the attacks against Georgia would, of course, be Russia, which has previously launched politically motivated cyberattacks against the government sectors of former Soviet states, including Estonia. Georgia is a U.S. ally, and since 2011, it has been an "aspirant country" in terms of its potential membership in NATO. It's also been engaged in a months-long spat with Moscow. After a Russian legislator's address to the Georgian parliament triggered protests, Georgia on June 20 temporarily blocked all flights originating from Russia. In response, Russian President Vladimir Putin on June 21 ordered that starting July 8, Russian carriers were barred from operating flights between Russia and Georgia. The Monday cyberattack against Georgia echoes cyberattacks launched against the country in 2008, weeks before the country was invaded by Russia over Georgia's "breakaway provinces" of South Ossetia and Abkhazia. At the time, Moscow said it wasn't responsible for the cyberattacks, but it suggested that some Russian individuals may have been independently involved.
Figuring out how to handle null pointers is a big problem for modern language design. Sometimes I think that half of the Java code I write is checking to see whether a pointer is null. The clever way some languages use a question mark to check for nullity helps, but it doesn't get rid of the issue. A number of modern languages have tried to eliminate the null testing problem by eliminating null altogether. If every variable must be initialized, there can never be a null. No more null testing. Problem solved. Time for lunch. The joy of this discovery fades within several lines of new code because data structures often have holes without information. People leave lines on a form blank. Sometimes the data isn't available yet. Then you need some predicate to decide whether an element is empty. If the element is a string, you can test whether the length is zero. If you work long and hard enough with the type definitions, you can usually come up with something logically sound for the particular problem, at least until someone amends the specs. After doing this a few times, you start wishing for one simple word that means an empty variable.
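The "hole in the data" problem can be illustrated in Python, where `None` plays the role of null. The form-field example here is hypothetical: even with the hole made explicit in the type, callers still need a hand-rolled predicate to decide whether an element is empty:

```python
from typing import Optional

# A form field the user may have left blank (hypothetical example).
# Optional[str] makes the "hole" explicit in the type, but callers
# still need a predicate to decide whether the element is empty.
def is_blank(field: Optional[str]) -> bool:
    return field is None or len(field) == 0

def display(field: Optional[str]) -> str:
    # the ?-style null check many languages offer, spelled out by hand
    return field if not is_blank(field) else "(not provided)"

print(display("Alice"))  # Alice
print(display(""))       # (not provided) -- blank line on the form
print(display(None))     # (not provided) -- data not available yet
```

Note that the predicate conflates two distinct cases, "left blank" and "not yet available", which is exactly the kind of distinction that breaks when someone amends the specs.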
Unsolved problems belong on the backlog. In theory, the Product Owner processes all backlog items, dismisses the irrelevant and prioritizes the most important ones into sprints, until the backlog is empty and the project is done. But in practice, that’s not what happens. The backlog just grows forever. It collects items that can wait, together with technical debt and hot potatoes which cannot simply be dismissed. To developers, the backlog is a spillway to keep their job doable. Agile says: whatever you don't know yet, or can do without for now, park it on the backlog, and forget about it. It will reemerge when needed. For the most part, this works. It is the power of Agile. But by the time unsolved problems reemerge, hot potatoes have become too hot to handle, and technical debt has become too expensive to repay. Implementation effort has grown far beyond the available resources. This can be prevented by adding some core insights and making a few small but essential changes to the Agile approach.
Quote for the day:
"Leaders dig into their business to learn painful realities rather than peaceful illusion." -- Orrin Woodward