Review: First 8-inch Windows tablet is a device that shouldn’t exist

My dissatisfaction with PC OEMs is something I have documented in the past. They offer a confusing array of products and tend to cut corners in the worst ways imaginable. The OEM response to Windows 8 has been to produce a wide range of machines sporting novel form factors to fit all sorts of niches, both real and imagined.

One niche that the OEMs haven’t tried to fill, however, has been sub-10-inch tablets. That’s not altogether surprising. Microsoft designed Windows 8 for screens of 10 inches or more, and initially the operating system’s hardware requirements had a similar constraint.

That decision looked a little short-sighted after the success of tablets such as the Google Nexus 7 and the iPad mini. Accordingly, Microsoft changed the rules in March, opening the door to a range of smaller Windows tablets.

The Acer Iconia W3 is the first—and currently the only—8-inch Windows tablet. That attribute alone makes it in some sense noteworthy. Sadly, it’s about the only thing that does.

Spec-wise, this is another Intel Clover Trail tablet, and its internals are basically the same as the devices that launched last year (such as its bigger brother, the Acer Iconia W510). This means a 1.8 GHz, dual-core, four-thread Intel Atom Z2760 CPU, 2 GB RAM, 64 GB flash storage (which with Acer’s default partitioning leaves a little over 29 GB usable), front and rear cameras, Bluetooth 4.0, and 802.11b/g/n (no 5 GHz support). There’s a micro-HDMI and a micro-USB port for external connectivity (a separate cable converts the micro-USB port into a full-size one), along with an SD card slot. The tablet has a speaker adequate for notification sounds but little more.

As a result, performance and battery life are similar to what we’ve seen before. The Iconia W3 comes equipped with full-blown Windows 8, unlike ARM tablets, so it can run any 32-bit Windows application—should you really want to. Clover Trail’s GPU performance is such that games and other graphics-intensive programs won’t run well, however.

Eight inches of horror

The new bits on this tablet are really the screen and the size.

Screens are important. We spend essentially all our time interacting with devices looking at screens. Cost-cutting on screens is unforgivable, as a bad screen will damage every single interaction you have with the device. This goes doubly so for tablets, where the screen works not only as an output device but also as the primary input device.

The Acer Iconia W3’s screen is a standout—because it is worst-in-class. I hated every moment I used the Iconia W3, and I hated it because I hated the screen. Its color accuracy and viewing angles are both miserable (whites aren’t white—they’re weirdly colorful and speckled). The screen has a peculiar grainy appearance that makes it look permanently greasy. You can polish as much as you like; it will never go away. The whole effect is reminiscent in some ways of old resistive screens.

It’s hard to overstate just how poor this screen is. At any reasonable tablet viewing distance, the color of the screen is uneven. The viewing angle is so narrow that at typical hand-held distances, the colors change across the width of the screen. At full arm’s length the screen does finally look even, but the device is obviously unusable that way.

Acer has clearly skimped on the screen. I’m sure the panel in the W3 was quite cheap, and that may be somewhat reflected in the unit’s retail price ($379 for a 32GB unit, $429 for this 64GB one—putting it at the same price as the 32GB iPad mini, which has a comparable amount of available disk space), but who cares? It doesn’t matter how cheap something is if you don’t want to use it at all.

This poor screen quality isn’t a question of resolution, either. 1280×800 is not a tremendously high resolution, but text looks crisp enough. At 186 pixels per inch, 1280×800 feels more or less OK for this size of device.

The low resolution does, however, have one significant drawback: it disables Windows 8’s side-by-side Metro multitasking, which requires a resolution of at least 1366×768. The W3’s screen is 86 pixels too narrow, so the Metro environment is strictly one application at a time.

This is an unfortunate decision. The side-by-side multitasking is one of the Metro environment’s most compelling features. Keeping Twitter or Messenger snapped to the side makes a lot of sense and works well. I’ve never used Windows 8 on a device that didn’t support side-by-side Metro multitasking before, and I don’t ever want to again.

Size-wise, the W3 may be small for a Windows tablet, but it’s not exactly small. It’s fat. The W3 is 11.4 mm thick. The iPad mini, in comparison, is 7.2 mm thick. The Iconia W3 is also heavy at 500 g; the iPad mini, in comparison, is 308 g. That makes the W3 more than 50 percent thicker and more than 50 percent heavier.

The thickness makes the lack of a full-sized USB port on the device more than a little confusing. There’s certainly room for a full USB port, and a full port would be more convenient than the dongle. But for whatever reason, Acer didn’t give us one.

The device itself feels solid enough, albeit plasticky. It doesn’t exude quality, but it’s a step or two up from the bargain basement.

Keyboard non-dock

The W3 also has a keyboard accessory. As is common for this kind of thing, the keyboard has no electrical connection to the tablet. It’s a Bluetooth keyboard powered by a pair of AAA batteries. It has a groove along the top that can hold the tablet in both landscape and portrait orientations and a clip on the back that lets you use the keyboard as a kind of screen protector.

The keyboard has to be manually paired to the tablet. It’s more or less full-size, with a reasonable key layout. It’s a typical mediocre keyboard. The feel is a little on the squishy side, lacking the crispness of, for example, Microsoft’s Type Cover for its Surface tablets. It’s better than any on-screen keyboard, and to that extent it does its job. But it’s a long way from being an actually good keyboard.

The groove does hold the tablet up, and on a level surface the unit doesn’t topple over, but it’s not as satisfactory as some of the hinged keyboard/docks we’ve seen on other devices. Tilt the base while carrying it or using it on your lap and the tablet is liable to fall out.

Linksys X3500 Provides Two Internet Connection Options

Home networking equipment manufacturer Linksys launched the X3500 series Wi-Fi modem router in Jakarta on Wednesday (17/07/2013). The product is dual-band capable, operating on the 2.4 GHz and 5 GHz frequencies simultaneously at speeds of up to 450 Mbps + 300 Mbps.

Kevin Kurniawan, Sales Manager for Linksys Networking Indonesia, said the X3500 provides two internet connection interfaces: ADSL and WAN.

An ADSL port is available on the X3500 by default, so an ADSL internet line from a provider such as Telkom Speedy can be connected directly. Cable modem subscribers, such as First Media customers, or users who already have an ADSL modem, can use the X3500’s WAN port instead.

The modem router also provides four LAN ports and a USB port that can be used to share content from an external hard disk or to connect a printer. With these, the X3500 can connect computers to mobile devices, tablets, televisions, and so on.

“If a user has a flash drive that already contains songs or videos and plugs it into the router, the content can be streamed from a smartphone or tablet via the DLNA network,” said Kevin.

Linksys offers the Cisco Connect Express mobile app for iOS and Android devices, which can be used for remote management, monitoring, and firmware upgrades.

The X3500 modem router, which targets the middle-to-upper market segment, is now being marketed in Indonesia at a price of Rp 1.7 million.

Although Linksys was acquired by Belkin in March 2013, the X3500 still carries the Cisco Linksys logo on its top. According to Kevin, Linksys Networking Indonesia continues to back its products with a warranty and after-sales service.

HP EliteBook Revolve 810, a “Tablet-Laptop” for Business Users

Hewlett-Packard (HP) launched its latest convertible, a device that combines a tablet and a notebook in one package, called the HP EliteBook Revolve 810, on Wednesday (24/07/2013).

Unlike most of the convertibles circulating in the market these days, HP is targeting this product at businesses.

According to Cynthia Defjan, MDM for Business Notebooks at HP Indonesia, the EliteBook Revolve 810 arrives as a device for business people, armed with a variety of features that cannot be found in consumer-grade devices.

For example, the tablet-notebook hybrid is equipped with a security feature called HP Client Security. Using it, users can protect the device at every layer, including hardware, software, and BIOS.

In addition, the device ships with Microsoft Defender or Microsoft Security Essentials installed, as well as a certified TPM security chip for data encryption.

“This device is targeted at the enterprise class. We differentiate in terms of manageability and security,” said Cynthia in Jakarta.

As another point of added value, HP also designed the device to be resilient and impact-resistant. One measure is the use of a Corning Gorilla Glass screen, which makes the device resistant to scratches and impacts.

As with convertible devices in general, the display of the EliteBook Revolve 810 can be rotated up to 360 degrees. To switch into tablet mode, the screen simply needs to be rotated and folded down.

“This is a business tablet that can be converted into a device with notebook performance. It is a tablet that comes with a keyboard,” said Defjan.

Because the device is aimed at enterprises, HP does not define a single standard specification. Interested buyers can customize or order configurations according to their own requirements.

The screen spans 11.6 inches with a brightness level of 400 nits. Available processors range from third-generation Intel Core chips up to the fourth-generation Haswell parts.

For storage, the product offers SSD options of up to 256 GB. It is also equipped with a camera, a backlit keyboard, and an NFC chip.

The operating system supplied is Windows 8. However, for companies that are not yet ready to switch to that operating system, HP also provides Windows 7.

The HP EliteBook Revolve 810 can already be ordered directly through HP. The cheapest configuration of the device is priced at approximately Rp 17 million.

Windows 8.1 Enterprise Preview Reflects the Growing Trend of Working Remotely

Microsoft unleashed Windows 8.1 Enterprise Preview today. The early look at the enterprise version of Windows 8.1 follows the release of Windows 8.1 Preview at Microsoft’s BUILD conference last month, and includes a variety of tools that show Microsoft’s commitment to both BYOD and virtualization.

Aside from the slew of changes and enhancements in the regular Windows 8.1 Preview edition, Windows 8.1 Enterprise Preview also includes features uniquely designed for business customers. Windows 8.1 Enterprise Preview adds business-friendly elements like DirectAccess and BranchCache. It also provides IT admins with the power to configure and lock down the Start screen on Windows 8 clients.

Microsoft also has tools in Windows 8.1 Enterprise Preview to help out with BYOD and virtualization: Windows To Go, and Virtual Desktop Infrastructure (VDI). Windows To Go lets the company put an entire managed Windows 8 desktop environment on a bootable USB thumb drive, and VDI gives the business the tools to enable users to use critical business software from virtually any Internet-connected device.

One of the hottest trends in business technology today is mobility and working remotely. The driving forces behind working remotely are the “bring your own device” (BYOD) trend and virtualization.

More and more companies are embracing BYOD and allowing (or requiring) employees to provide their own PCs and mobile devices. BYOD can be a cost-cutting measure for the company, because the employee is taking on some (or all) of the burden of purchasing the PC. BYOD enables users to be more productive and have higher job satisfaction because they get to use the hardware they prefer, and are more comfortable with.

BYOD also introduces some unique concerns, though, when it comes to enforcing policies and protecting company data. Regardless of its benefits, companies can’t just let employees connect rogue computers to the network, or store sensitive company data on a personal PC without any protection. The nice thing about Windows To Go is that it turns any Windows 7 or Windows 8 device into a managed Windows 8 PC without installing any additional software, or putting the personal applications or data of the employee at risk.

Another factor in working remotely is virtualization. Whether hosted locally or in the cloud, virtual servers allow the company to maximize the value from its investment in hardware and adapt quickly to changing demand or business needs. From an endpoint perspective, virtual applications or virtual desktops are more valuable. A virtual desktop infrastructure like the one in Windows 8.1 Enterprise simplifies deployment and management of software because the company only has to install and maintain it in one place. At the same time, it helps users get more done even on older or weaker hardware because much of the processing overhead is handled on the server end.

Small and medium businesses have a lot to gain from both BYOD and virtualization. The features and capabilities of Windows 8.1 Enterprise Preview demonstrate Microsoft’s commitment to keeping SMB customers on the cutting edge.

How Smart Dust Could Be Used To Monitor Human Thought

A few years ago a team of researchers from Brown University made headlines after they successfully demonstrated how a paralyzed woman who had lost the use of her arms and legs could control a robotic arm using her brainwaves. In a video, Cathy Hutchinson imagines drinking a cup of coffee, and the robotic arm brings the cup to her lips.

The scene is amazing, but also a little disturbing. Hutchinson is connected to the robotic arm through a rod-like “pedestal” driven into her skull. At one end of the pedestal, a bundle of gold wires is attached to a tiny array of microelectrodes that is implanted in the primary motor cortex of Hutchinson’s brain. This sensor, which is about the size of a baby aspirin, records her neural activity. At the other end of the pedestal is an external cable that transmits neural data to a nearby computer, which translates the signals into code that guides the robotic arm.

This method, known as BrainGate, pretty much defined state-of-the-art brain-computer interfaces at the end of the last decade. If the idea of a rod-through-the-head computer interface makes you cringe, you are not alone.

For some time, a small team of researchers at UC Berkeley has been working on plans for a less invasive, wireless monitoring system. Earlier this month, they released a draft paper: “Neural Dust: An Ultrasonic, Low Power Solution for Chronic Brain-Machine Interfaces.”

Dongjin Seo, a graduate student in UC Berkeley’s electrical engineering and computer science department, authored the paper under the supervision of senior faculty members, including Michel Maharbiz who has famously created cyborg beetles for the US Defense Department.

Seo said the researchers’ goal is to build an implantable system for brain-machine interfaces that is ultra-miniature, extremely compliant, and scalable enough to remain viable for a lifetime. “With neural dust, due to its extreme scalability, this framework can be applied for Obama’s BRAIN initiative, which necessitates large-scale, parallel, and real-time monitoring of neurons,” Seo explained.

The Berkeley researchers propose to sprinkle the brain with tiny, dust-sized, wireless sensors. This would reduce the risk of infection from wiring up scores of sensors placed throughout the brain and limit the trauma to one initial operation. During that operation, the skull would be opened, and sensors would be inserted into the brain. At the same time a separate transceiver would be placed directly under the skull but above the brain. The transceiver would communicate with the sensors via ultrasound.

Another battery-powered transceiver outside the skull would receive data transmissions from the chip inside the skull and supply wireless power to it.  As the paper notes, this type of power transfer is already used in a variety of medical applications, including cochlear implants. Seo said the amount of power being proposed is within FDA and IEEE guidelines.

The idea of neural dust immediately sparked the imagination of futurists after the paper was published on arXiv.org on July 8. “The brilliance of this system is that it could potentially allow scientists to see what’s going on with thousands, tens of thousands, or even hundreds of thousands of neurons inside the brain at once,” wrote Ramez Naam, a senior associate at the Foresight Institute and author of “More Than Human: Embracing the promise of biological enhancement.”

But would neural dust have practical use for the growing industry of mind-controlled computer games and brain training apps? Jon Cowan, founder of NeuroTek, is dubious. NeuroTek’s Peak Achievement Training has been used at the U.S. Olympic Training Center in Colorado Springs, as well as at other Olympic centers from China to Norway.

“[Neural dust] doesn’t have much practical promise because of the surgery it would require,” Cowan said. “I don’t think they’ll find too many people that would volunteer for it.” Cowan noted existing ways for measuring brainwaves that rely on external sensors may be crude, but they’re effective enough for today’s applications.

“We really believe this is a practical system and, more importantly, we think this is potentially a powerful framework for achieving what Obama has announced,” Seo said. Still, he pointed out that the paper is a draft. “It’s a concept paper,” he said. “It’s a theoretical study of what we think is possible in the realm of neural recording.”

By publishing the paper on arXiv.org, an online collection of preprints of scientific work, the team is hoping to spur involvement and feedback from scientists in different fields. Lots of challenges remain to be overcome before neural dust will be ready for live testing.

Logitech’s Latest Collection: Panda Candy & Floral Foray

JAKARTA – After successfully bringing out wireless mice with Pink Splash and Black Topography patterns, Logitech is again presenting its newest products from the Logitech Global Graffiti Collection: the Logitech Wireless Mouse M235 Limited Edition with Candy Panda and Floral Foray motifs.

“Our latest collection was launched to meet the public’s desire for a wireless mouse that is not only reliable and comfortable to use, but also has a creative shape with a unique motif,” said Sutanto Kurniadih, Country Manager of Logitech Indonesia.

The cute designs Logitech has adopted are the result of its partnership with leading designers from around the world. Together, they create a variety of unique styles and motifs that reflect the creativity and personal style of their users.

The Logitech Wireless Mouse M235 Limited Edition is equipped with Logitech Advanced Optical Tracking, which works on almost any kind of surface. The mouse also features Logitech Advanced 2.4 GHz wireless connectivity, which provides fast data transmission without pauses or dropped connections.

According to a news release received by Okezone on Tuesday (02/07/2013), the wireless mouse comes with a rubberized grip and a scroll wheel. To use the Candy Panda or Floral Foray mouse, you simply connect its receiver to a USB port. The mouse is also equipped with an On/Off button and a sleep mode to conserve battery power.

The Evolution of Direct3D

* UPDATE: Be sure to read the comment thread at the end of this blog; the discussion got interesting.

It’s been many years since I worked on Direct3D, and over the years the technology has evolved dramatically. Modern GPU hardware has changed tremendously, achieving processing power and capabilities way beyond anything I dreamed of having access to in my lifetime. The evolution of the modern GPU is the result of many fascinating market forces, but the one I know best and find most interesting is the influence that Direct3D had on the new generation of GPUs that sport thousands of processing cores, have billions more transistors than the host CPU, and are many times faster at most applications. I’ve told a lot of funny stories about how political the creation of Direct3D was, but here I would like to document some of the history of how the Direct3D architecture came about, an architecture that had a profound influence on modern consumer GPUs.

Published here with this article is the original documentation for Direct3D from DirectX 2, when it was first introduced in 1995. Contained in this document is an architectural vision for 3D hardware acceleration that was largely responsible for shaping the modern GPU into the incredibly powerful, increasingly ubiquitous consumer general-purpose supercomputer we see today.

D3DOVER
The reason I got into computer graphics was NOT an interest in gaming; it was an interest in computational simulation of physics. I studied 3D at Siggraph conferences in the late 1980s because I wanted to understand how to approach simulating quantum mechanics, chemistry, and biological systems computationally. Simulating light interactions with materials was all the rage at Siggraph back then, so I learned 3D. Understanding the 3D mathematics and physics of light made me a graphics and color expert, which got me a career early on in the publishing industry creating PostScript RIPs (Raster Image Processors). I worked with a team of engineers in Cambridge, England creating software solutions for screening color graphics for print before the invention of continuous-tone printing. That expertise got me recruited by Microsoft in the early 1990s to re-design the Windows 95 and Windows NT print architecture to be more competitive with Apple’s superior capabilities at the time. My career came full circle back to 3D when an initiative I started with a few friends to re-design the Windows graphics and media architecture (DirectX) to support real-time gaming and video applications resulted in gaming becoming hugely strategic to Microsoft. Sony had introduced a consumer 3D game console (the PlayStation 1), and being responsible for DirectX, it was incumbent on us to find a 3D solution for Windows as well.

For me, the challenge in formulating a strategy for consumer 3D gaming for Microsoft was an economic one. What approach to consumer 3D should Microsoft take to create a vibrant, competitive market for consumer 3D hardware that was both affordable to consumers AND future-proof? The complexity of realistically simulating 3D graphics in real time was so far beyond our capabilities in that era that there was NO hope of choosing a solution that was anything short of an ugly hack, one that would produce “good enough” 3D for games while remaining very far removed from the mathematically ideal solutions we had little hope of seeing in the real world during our careers.

Up until that point the only commercial solutions for 3D hardware were for CAD (Computer Aided Design) applications. These solutions worked fine for people who could afford hundred-thousand-dollar workstations. Although the OpenGL API was the only “standard” for 3D APIs that the market had, it had not been designed with video game applications in mind. For example, texture mapping, an essential technique for producing realistic graphics, was not a priority for CAD models, which needed to be functional, not look cool. Rich dynamic lighting was also important to games but not as important to CAD applications. High precision was far more important to CAD applications than to gaming. Most importantly, OpenGL was not designed for highly interactive real-time graphics that used off-screen video page buffering to avoid tearing artifacts during rendering. It was not that the OpenGL API could not be adapted to handle these features for gaming; it was simply that its actual market implementations on expensive workstations did not suggest any elegant path to $200 consumer gaming cards.

In the early 1990s computer RAM was very expensive; as such, early 3D consumer hardware designs optimized for minimal RAM requirements. The Sony PlayStation 1 optimized for this problem by using a 3D hardware solution that did not rely on a memory-intensive data structure called a Z-buffer; instead it used a polygon-level sorting algorithm that produced ugly intersections between moving joints. This “Painter’s Algorithm” approach to 3D was very fast and required little RAM. It was an ugly but pragmatic approach for gaming that would have been utterly unacceptable for CAD applications.
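For readers who want the trade-off made concrete, here is a minimal, illustrative C++ sketch (not PlayStation or Direct3D code): the Z-buffer approach pays for one depth value per pixel in exchange for correct per-pixel visibility, while the painter’s algorithm avoids that memory cost by sorting whole polygons and drawing them back to front.

#include <cstddef>
#include <limits>
#include <vector>

// Minimal illustration only: a Z-buffer stores one depth value per pixel, so
// overlapping triangles resolve correctly pixel by pixel, at the cost of
// width*height extra memory -- the RAM the Painter's Algorithm avoided.
struct ZBufferTarget {
    std::size_t width, height;
    std::vector<float> depth;     // one depth value per pixel
    std::vector<unsigned> color;  // one packed RGBA value per pixel

    ZBufferTarget(std::size_t w, std::size_t h)
        : width(w), height(h),
          depth(w * h, std::numeric_limits<float>::infinity()),
          color(w * h, 0) {}

    // Plot a candidate fragment; keep it only if it is closer than what is stored.
    void plot(std::size_t x, std::size_t y, float z, unsigned rgba) {
        std::size_t i = y * width + x;
        if (z < depth[i]) {       // per-pixel visibility test
            depth[i] = z;
            color[i] = rgba;
        }
    }
};
// A painter's-algorithm renderer instead sorts whole polygons back to front and
// draws them in order, needing no depth buffer but mis-sorting interpenetrating
// polygons (the "ugly intersections between moving joints" mentioned above).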

In formulating the architecture for Direct3D we were faced with similarly difficult choices. We wanted the leading Windows graphics vendors of the time (ATI, Cirrus, Trident, S3, Matrox, and many others) to be able to compete with one another for rapid innovation in the 3D hardware market without creating utter chaos. The technical solution that Microsoft’s OpenGL team espoused via Michael Abrash was a driver model called 3DDDI (3D Device Driver Interface). 3DDDI was a very simple, flat driver model that just supported the hardware acceleration of 3D rasterization. The complex mathematics associated with transforming and lighting a 3D scene were left to the CPU. 3DDDI used “capability bits” to specify additional hardware rendering features (like filtering) that consumer graphics card makers could optionally implement. The problem with 3DDDI was that it invited problems for game developers right out of the gate. There were so many cap-bits that every game would either have to support an innumerable number of feature combinations across unspecified hardware, to take advantage of every possible way that hardware vendors might choose to design their chips, producing an untestable number of possible hardware configurations and a huge amount of redundant art assets that games would have to lug around to look good on any given device; OR games would revert to using a simple set of common 3D features supported by everyone, and there would be NO competitive advantage for companies to support new 3D hardware capabilities that did not have instant market penetration. The OpenGL crowd at Microsoft did not see this as a big problem in their world, because everyone there just bought a $100,000 workstation that supported everything they needed.

The realization that we could not get what we needed from the OpenGL team was one of the primary reasons we decided to create a NEW 3D API just for gaming. It had nothing to do with the API itself, but with the driver architecture underneath, because we needed to create a competitive market that did not result in chaos. In this respect the Direct3D API was not an alternative to the OpenGL API; it was a driver API designed for the sole economic purpose of creating a competitive market for 3D consumer hardware. In other words, the Direct3D API was not shaped by “technical” requirements so much as economic ones. In this respect the Direct3D API was revolutionary in several interesting ways that had nothing to do with the API itself but rather with the driver architecture it would rely on.

When we decided to acquire a 3D team to build Direct3D with, I was chartered with surveying the market for candidate companies with the right expertise to help us build the API we needed. As I have previously recounted, we looked at Epic Games (creators of the Unreal engine), Criterion (later acquired by EA), Argonaut, and finally Rendermorphics. We chose Rendermorphics (based in London) because of the large number of quality 3D engineers the company employed and because the founder, Servan Kiondijian, had a very clear vision of how consumer 3D drivers should be designed for maximum future compatibility and innovation. The first implementation of the Direct3D API was rudimentary but quickly evolved toward something with much greater future potential.

(Image: D3DOVER rendered left-handed. Whoops!)

My principal memory from that period was a meeting in which I, as the resident expert on the DirectX 3D team, was asked to choose a handedness for the Direct3D API. I chose a left-handed coordinate system, in part out of personal preference. I remember it now only because it was an arbitrary choice that caused no end of grief for years afterwards, as all the other graphics authoring tools adopted the right-handed coordinate system of the OpenGL standard. At the time nobody knew or believed that a CAD tool like Autodesk’s would evolve to become the standard tool for authoring game graphics. Microsoft had acquired Softimage with the intention of displacing Autodesk and Maya anyway. Whoops…
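As a small aside, the practical cost of that arbitrary choice is easy to show. The C++ sketch below is illustrative only (not Direct3D code): moving right-handed, OpenGL-style assets into a left-handed convention amounts to negating the Z coordinate and reversing triangle winding so faces don’t turn inside out.

#include <array>

using Vec3 = std::array<float, 3>;
using Tri  = std::array<int, 3>;   // indices into a vertex array

// Convert a right-handed (OpenGL-style) vertex to a left-handed convention
// by flipping the Z axis.
inline Vec3 toLeftHanded(const Vec3& v) {
    return { v[0], v[1], -v[2] };
}

// Flipping an axis mirrors the geometry, so triangle winding must be reversed
// as well or front faces become back faces.
inline Tri flipWinding(const Tri& t) {
    return { t[0], t[2], t[1] };
}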

The early Direct3D HAL (Hardware Abstraction Layer) was designed in an interesting way. It was structured vertically into three stages.

DX 2 HAL

The highest, most abstract layer was the transformation layer; the middle layer was dedicated to lighting calculations; and the bottom layer handled rasterization of the finally transformed and lit polygons into depth-sorted pixels. The idea behind this vertical driver structure was to provide a relatively rigid feature path for hardware vendors to innovate along. They could differentiate their products from one another by designing hardware that accelerated increasingly higher layers of the 3D pipeline, resulting in greater performance and realism without incompatibilities, a sprawling matrix of configurations for games to test against, or redundant art assets. Since the Direct3D API created by Rendermorphics provided a “pretty fast” software implementation for any functionality not accelerated by the hardware, game developers could focus on the Direct3D API without worrying about myriad permutations of incompatible 3D hardware capabilities. At least that was the theory. Unfortunately, like the 3DDDI driver specification, Direct3D still included capability bits designed to enable hardware features that were not part of the vertical acceleration path. Although I actively objected to the tendency of Direct3D to accumulate capability bits, the team felt extraordinary competitive pressure from Microsoft’s own OpenGL group and from the hardware vendors to support them.
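To show why capability bits were so corrosive in practice, here is a small, hypothetical C++ sketch; the flag names are invented for illustration and are not the real DirectX constants. Every optional bit a game queries forks its rendering code and multiplies the hardware configurations it must test.

#include <cstdint>

// Hypothetical capability bits -- illustrative names only, not real DirectX caps.
enum DeviceCaps : std::uint32_t {
    CAP_BILINEAR_FILTER = 1u << 0,
    CAP_FOG_TABLE       = 1u << 1,
    CAP_SPECULAR        = 1u << 2,
};

bool supportsFiltering(std::uint32_t caps) {
    return (caps & CAP_BILINEAR_FILTER) != 0;
}

// Each optional feature doubles the number of device configurations a game must
// test -- the combinatorial explosion described in the text above.
void chooseRenderPath(std::uint32_t caps) {
    if (supportsFiltering(caps) && (caps & CAP_SPECULAR)) {
        // fancy path: filtered textures plus specular highlights
    } else if (supportsFiltering(caps)) {
        // middle path: filtering only
    } else {
        // lowest-common-denominator path every card can run
    }
}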

The hardware companies, seeking a competitive advantage for their own products, would threaten to support and promote OpenGL to game developers because the OpenGL driver model supported capability bits that enabled them to create features for their hardware that nobody else supported. It was common (and still is) for the hardware OEMs to pay game developers to adopt features unique to their hardware but incompatible with the installed base of gaming hardware, forcing consumers to constantly upgrade their graphics cards to play the latest PC games. Game developers alternately hated capability bits because of their complexity and incompatibilities, but wanted to take the marketing dollars from the hardware OEMs to support “non-standard” 3D features.

Overall I viewed this dynamic as destructive to a healthy PC gaming economy and advocated resisting the trend regardless of what the OpenGL people or the OEMs wanted. I believed that creating a consistent, stable consumer market for PC games was more important than appeasing the hardware OEMs. As such, I was a strong advocate of the relatively rigid vertical Direct3D pipeline and a proponent of introducing only API features that we expected to become universal over time. I freely confess that this view implied significant constraints on innovation in other areas and placed a high burden of market prescience on the Direct3D team.

The result, in my estimation, was pretty good. The Direct3D fixed function pipeline, as it was known, produced a very rich and growing PC gaming market with many healthy competitors through to DirectX 7.0 and the early 2000’s. The PC gaming market boomed and grew to be the largest gaming market on Earth. It also resulted in a very interesting change in the GPU hardware architecture over time.

Had the Direct3D HAL been a flat driver model with just rasterization and capability bits, as the OpenGL team at Microsoft had advocated, 3D hardware makers would have competed by accelerating just the bottom layer of the 3D rendering pipeline and adding differentiating features to their hardware via capability bits that were incompatible with their competitors’. The result of introducing the vertical layered architecture was that 3D hardware vendors were all encouraged to add features to their GPUs that were more consistent with general-purpose CPU architectures, namely very fast floating-point operations, in a consistent way. Thus consumer GPUs evolved over the years to increasingly resemble general-purpose CPUs… with one major difference. Because the 3D fixed-function pipeline was rigid, the Direct3D architecture afforded very little opportunity for the frequent code branching that CPUs are designed to optimize for. GPUs achieved their amazing performance and parallelism in part by being free to assume that little or no branching code would ever occur inside a Direct3D graphics pipeline. Thus instead of evolving one giant monolithic CPU core with massive numbers of transistors dedicated to efficient branch prediction, as an Intel CPU has, a Direct3D GPU has hundreds to thousands of simple cores that have no branch prediction. They can chew through a calculation at incredible speed, confident in the knowledge that they will not be interrupted by code branching or random memory accesses to slow them down.
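As a rough illustration of the style of code this encouraged (a sketch of the general technique, not anything specific to Direct3D), the same per-pixel decision can be written as a data-dependent branch or as straight-line select arithmetic that thousands of simple, predictionless cores can run in lockstep.

#include <vector>

// Branchy version: a data-dependent jump on every pixel.
void darkenIfBright_branchy(std::vector<float>& px) {
    for (float& p : px)
        if (p > 0.5f) p *= 0.5f;
}

// Branchless version: the condition becomes a 0/1 mask folded into arithmetic,
// so there is no divergent control flow for a wide parallel machine to stall on.
void darkenIfBright_branchless(std::vector<float>& px) {
    for (float& p : px) {
        float mask = (p > 0.5f) ? 1.0f : 0.0f;   // typically compiles to a compare/select
        p = p * (1.0f - 0.5f * mask);
    }
}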

Up through DirectX 7.0, the underlying parallelism of the GPU was hidden from the game. As far as the game was concerned, some hardware was just faster than other hardware, but the game should not have to worry about how or why. The early DirectX fixed-function pipeline architecture had done a brilliant job of enabling dozens of disparate, competing hardware vendors to all take different approaches to achieving superior cost and performance in consumer 3D without making a total mess of the PC gaming market for game developers and consumers. It was not pretty and was not entirely executed with flawless precision, but it worked well enough to create an extremely vibrant PC gaming market through to the early 2000s.

Before I move on to discussing the more modern evolution of Direct3D, I would like to highlight a few other important ideas that influenced the architecture of early modern GPUs. Recall that in the early to mid 1990s RAM was relatively expensive, so there was a lot of emphasis on consumer 3D techniques that conserved RAM. The Talisman architecture, which I have told many (well-deserved) derogatory stories about, was highly influenced by this observation.

Talisman
Search this blog for tags “Talisman” and “OpenGL” for many stories about the internal political battles over these technologies within Microsoft

Talisman relied on a grab bag of graphics “tricks” to minimize GPU RAM usage that were not very generalized. The Direct3D team, heavily influenced by the Rendermorphics founders, had made a difficult philosophical choice in its approach to creating a mass market for consumer 3D graphics. We had decided to go with a simpler, more general-purpose approach to 3D that relied on a very memory-intensive data structure called a Z-buffer to achieve great-looking results. Rendermorphics had managed to achieve very good 3D performance in pure software with a software Z-buffer in the Rendermorphics engine, which had given us the confidence to take the bet on a simpler, more general-purpose 3D API and driver model and to trust that the hardware market and RAM prices would eventually catch up. Note, however, that at the time we were designing Direct3D we did not know about the Microsoft Research group’s “secret” Talisman project, nor did they expect that a small group of evangelists would cook up a new 3D API standard for gaming and launch it before their own wacky initiative could be deployed. In short, one of the big bets Direct3D made was that the simplicity and elegance Z-buffers brought to game development were worth the risk that consumer 3D hardware would struggle to affordably support them early on.

Despite the big bet on Z-buffer support, we were intimately aware of two major limitations of the consumer PC architecture that needed to be addressed. The first was that the PC bus was generally very slow, and the second was that it was much slower to copy data from a graphics card than it was to copy data to a graphics card. What that generally meant was that our API design had to strive to send data in the largest, most compact packages possible up to the GPU for processing and to absolutely minimize any need to copy data back from the GPU for further processing on the CPU. This generally meant that the Direct3D API was optimized to package data up and send it on a one-way trip. This was of course an unfortunate constraint, because there were many brilliant 3D effects that could best be accomplished by mixing the CPU’s efficient branch prediction and robust floating-point support with the GPU’s incredible parallel rendering performance.

One of the fascinating consequences of that constraint was that it forced GPUs to become even more general-purpose to compensate for the inability to share data with the CPU efficiently. This was possibly the opposite of what Intel intended to happen with its limited bus performance, because Intel was threatened by the idea that auxiliary cards would offload more processing from its CPUs, thereby reducing the value of Intel’s CPUs and their central role in PC computing. It was reasonably believed at the time that Intel deliberately dragged its feet on improving PC bus performance to deter a market for alternatives to its CPUs for consumer media processing applications. Readers of my earlier blogs will recall that the main REASON for creating DirectX was to prevent Intel from trying to virtualize all of the Windows media support on the CPU. Had Intel adopted a PC bus architecture that enabled extremely fast access to system RAM shared by auxiliary devices, it is less likely that GPUs would have evolved the relatively rich set of branching and floating-point operations they support today.

To overcome the fairly stringent performance limitations of the PC bus, a great deal of thought was put into techniques for compressing and streamlining DirectX assets being sent to the GPU, to minimize bus bandwidth limitations and the need for round trips from the GPU back to the CPU. The early need for the rigid 3D pipeline had interesting consequences later on when we began to explore streaming 3D assets over the Internet via modems.

We recognized early on that support for compressed texture maps would dramatically improve bus performance and reduce the amount of onboard RAM consumer GPUs needed. The problem was that no standards existed for 3D texture formats at the time, and knowing how fast image compression technologies were evolving, I was loath to impose a Microsoft-specified one “prematurely” on the industry. To overcome this problem we came up with the idea of “blind compression formats.” The idea, which I believe was captured in one of the many DirectX patents that we filed, was that a GPU could encode and decode image textures in an unspecified format, but the DirectX APIs would allow the application to read and write from them as though they were always raw bitmaps. The Direct3D driver would encode and decode the image data as necessary under the hood, without the application needing to know how it was actually being encoded on the hardware.
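The blind-compression idea can be sketched in a few lines of hypothetical C++; this is an illustration of the concept described above, not the actual DirectX interface, and the class and method names are invented. The application only ever sees raw pixels, while the storage format stays opaque.

#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical sketch: the app reads and writes plain RGBA bitmaps, while the
// driver is free to keep the texture in any vendor-specific encoded form.
class BlindTexture {
public:
    void writePixels(const std::vector<std::uint32_t>& rgba) {
        encoded_ = driverEncode(rgba);   // opaque to the application
    }
    std::vector<std::uint32_t> readPixels() const {
        return driverDecode(encoded_);   // decoded back to raw pixels on demand
    }
private:
    std::vector<std::uint8_t> encoded_;  // whatever the hardware prefers

    // Stand-ins for the driver's hidden codec; here just a byte-for-byte copy.
    static std::vector<std::uint8_t> driverEncode(const std::vector<std::uint32_t>& rgba) {
        const auto* p = reinterpret_cast<const std::uint8_t*>(rgba.data());
        return std::vector<std::uint8_t>(p, p + rgba.size() * sizeof(std::uint32_t));
    }
    static std::vector<std::uint32_t> driverDecode(const std::vector<std::uint8_t>& enc) {
        std::vector<std::uint32_t> rgba(enc.size() / sizeof(std::uint32_t));
        std::memcpy(rgba.data(), enc.data(), enc.size());
        return rgba;
    }
};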

By 1998, 3D chip makers had begun to devise good-quality compressed 3D texture formats, such that by DirectX 6.0 we were able to license one of them (from S3) for inclusion with Direct3D.

http://www.microsoft.com/en-us/news/press/1998/mar98/s3pr.aspx

DirectX 6.0 was actually the first version of DirectX that was included in a consumer OS release (Windows 98). Until that time, DirectX was actually just a family of libraries that were shipped by the Windows games that used them. DirectX was not actually a Windows API until five generations after its first release.

DirectX 7.0 was the last generation of DirectX that relied on the fixed-function pipeline we had laid out in DirectX 2.0 with the first introduction of the Direct3D API. This was a very interesting transition period for Direct3D for several reasons:

1) The original founders of the DirectX team had all moved on,

2) Microsoft’s internal Talisman project and its reasons for supporting OpenGL had both passed,

3) Microsoft had brought game industry veterans like Seamus Blackley, Kevin Bacchus, Stuart Moulder, and others into the company in senior roles, and

4) Gaming had become a strategic focus for the company.

DirectX 8.0 marked a fascinating transition for Direct3D because, with the death of Talisman and the loss of strategic interest in OpenGL 3D support, many of the people from these groups came to work on Direct3D. Talisman, OpenGL, and game industry veterans all came together to work on Direct3D 8.0. The result was very interesting. Looking back, I freely concede that I would not have made the same set of choices that this group made for DirectX 8.0, but it seems to me that everything worked out for the best anyway.

Direct3D 8.0 was influenced in several interesting ways by the market forces of the late 20th century. Microsoft largely unified against OpenGL and found itself competing with the Khronos Group standards committee to advance Direct3D faster than OpenGL. With the death of SGI, control of the OpenGL standard fell into the hands of the 3D hardware OEMs, who of course wanted to use the standard to enable them to create differentiating hardware features from their competitors and to force Microsoft to support 3D features they wanted to promote. The result was that Direct3D and OpenGL became much more complex, and they tended to converge during this period. There was a stagnation in 3D feature adoption by game developers from DirectX 8.0 through DirectX 11.0 as a result of these changes. Creating game engines became so complex that the market also converged around a few leading engine providers, including Epic’s Unreal Engine and the Quake engine from id Software.

Had I been working on Direct3D at the time, I would have stridently resisted letting the 3D chip OEMs lead Microsoft around by the nose chasing OpenGL features instead of focusing on enabling game developers and a consistent, quality consumer experience. I would have opposed introducing shader support in favor of trying to keep the Direct3D driver layer as vertically integrated as possible to ensure feature conformity among hardware vendors. I also would have strongly opposed abandoning DirectDraw support, as was done in Direct3D 8.0. The 3D guys got out of control and decided that nobody should need pure 2D APIs once developers adopted 3D, failing to recognize that simple 2D APIs enabled a tremendous range of features and an ease of programming that the majority of developers, who were not 3D geniuses, could easily understand and use. Forcing the market to learn 3D dramatically constrained the set of people with the expertise to adopt it. Microsoft later discovered the error in this decision and re-introduced DirectDraw as the Direct2D API. Basically, letting the 3D design geniuses loose on Direct3D 8.0 made it brilliant, powerful, and useless to average developers.

At the time DirectX 8.0 was being made, I was starting my first company, WildTangent Inc., and ceased to be closely involved with what was going on with DirectX features. However, years later I was able to get back to my 3D roots and took the time to learn Direct3D programming in DirectX 11.1. Looking back, it’s interesting to see how the major architectural changes that were made in DirectX 8 resulted in the massively convoluted and nearly incomprehensible Direct3D API we see today. Remember the three-stage DirectX 2 pipeline that separated transformation, lighting, and rasterization into three basic stages? Here is a diagram of the modern DirectX 11.1 3D pipeline.

DX 11 Pipeline

Yes, it grew to 9 stages, or arguably 13 stages when some of the optional sub-stages, like the compute shader, are included. Speaking as somebody with an extremely lengthy background in very low-level 3D graphics programming, I’m embarrassed to confess that I struggled mightily to learn Direct3D 11.1 programming. The API had become very nearly incomprehensible and unlearnable. I have no idea how somebody without my extensive background in 3D and graphics could ever begin to learn how to program a modern 3D pipeline. As amazingly powerful and featureful as this pipeline is, it is also damn near unusable by any but a handful of the brightest minds in 3D graphics. In the course of catching up on my Direct3D, I found myself simultaneously in awe of the astounding power of modern GPUs and where they were going, and in shocked disgust at the absolute mess the 3D pipeline had become. It was as though the Direct3D API had become a dumping ground for every 3D feature that every OEM had demanded over the years.

Had I not enjoyed the benefit of the decade-long break from Direct3D involvement, I would undoubtedly have a long history of bitter blogs written about what a mess my successors had made of a great and elegant vision for consumer 3D graphics. Weirdly, however, leaping forward in time to the present day, I am forced to admit that I’m not sure it was such a bad thing after all. The result of the stagnation of gaming on the PC, caused by the mess Microsoft and the OEMs made of the Direct3D API, was a successful XBOX. Having a massively fragmented 3D API is not such a problem if game developers have only one hardware configuration to support, as is the case with a game console. Direct3D 8.0, with its early, primitive shader support, was the basis for the first Xbox’s graphics API. For the first XBOX, Microsoft selected an NVIDIA chip, giving NVIDIA a huge advantage in the 3D PC chip market. DirectX 9.0, with more advanced shader support, was the basis for the XBOX 360, for which Microsoft selected ATI to provide the 3D chip, this time handing AMD a huge advantage in the PC graphics market. In a sense the OEMs had screwed themselves. By successfully influencing Microsoft and the OpenGL standards groups to adopt highly convoluted graphics pipelines to support all of their feature sets, they had forced themselves to generalize their GPU architectures, and the 3D chip market consolidated around whatever 3D chip architecture Microsoft selected for its consoles.

The net result was that the retail PC game market largely died. It was simply too costly, too insecure, and too unstable a platform to publish high-production-value games on any longer, with the partial exception of MMOGs. Microsoft and the OEMs had conspired together to kill the proverbial golden goose. No biggie for Microsoft, as they were happy to gain complete control of the former PC gaming business by virtue of controlling the XBOX.

From the standpoint of the early DirectX vision, I would have said that this outcome was a foolish, shortsighted disaster. Had Microsoft maintained a little discipline and strategic focus on the Direct3D API, it could have ensured that there were NO other consoles in existence within a single generation by using the XBOX to strengthen the PC gaming market rather than inadvertently destroying it. While Microsoft congratulates itself for the first successful US launch of a console, I would count all the gaming dollars collected by Sony, Nintendo, and mobile gaming platforms over the years that might have remained on Microsoft-controlled platforms had Microsoft maintained a cohesive strategy across its media platforms. I say all of this from a past-tense perspective because, today, I’m not so sure that I’m really all that unhappy with the result.

The new generation of consoles from Sony AND Microsoft have reverted to a PC architecture! The next-generation GPUs are massively parallel, general-purpose processors with intimate access to memory shared with the CPU. In fact, the GPU architecture became so generalized that a new pipeline stage called DirectCompute was added in DirectX 11 that simply allows the CPU to bypass the entire convoluted Direct3D graphics pipeline in favor of programming the GPU directly. With the introduction of DirectCompute, the promise of simple 3D programming returned in an unexpected form. Modern GPUs have become so powerful and flexible that the possibility of writing cross-GPU 3D engines directly for the GPU, without making any use of the traditional 3D pipeline, is an increasingly practical and appealing programming option. From my perspective here in the present day, I would anticipate that within a few short generations the need for the traditional Direct3D and OpenGL APIs will vanish in favor of new game engines with much richer and more diverse feature sets that are written entirely in device-independent shader languages like NVIDIA’s CUDA and Microsoft’s AMP APIs.
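To give a feel for why compute-style programming is so much simpler, here is a tiny CPU-side C++ sketch of the programming model; it is illustrative only and is not DirectCompute or D3D11 code. You write a kernel that runs once per thread ID and dispatch it over a grid, with no vertex, pixel, or rasterizer stages involved.

#include <cstddef>
#include <vector>

// Dispatch a kernel over a 1D grid of thread IDs. A real GPU runs these
// invocations in parallel; here they simply run in a loop.
template <typename Kernel>
void dispatch(std::size_t threadCountX, Kernel kernel) {
    for (std::size_t id = 0; id < threadCountX; ++id)
        kernel(id);
}

int main() {
    std::vector<float> data(1024, 1.0f);

    // The "compute shader": scale every element, no triangles or rasterizer anywhere.
    dispatch(data.size(), [&](std::size_t id) {
        data[id] *= 2.0f;
    });
    return 0;
}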

Today, as a 3D physics engine developer, I have never been so excited about GPU programming, because of the sheer power and relative ease of programming directly to the modern GPU without needing to master the enormously convoluted 3D pipelines associated with the Direct3D and OpenGL APIs. If I were responsible for Direct3D strategy today, I would be advocating dumping the investment in the traditional 3D pipeline in favor of rapidly opening direct access to a rich GPU programming environment. I personally never imagined that my early work on Direct3D would, within a couple of decades, contribute to the evolution of a new kind of ubiquitous processor enabling the kind of incredibly realistic and general modeling of light and physics that I had learned about in the 1980s but never believed I would see computers powerful enough to model in real time during my active career.

VSEncryptor Encryption Application

A Portable File-Protection Application

VSEncryptor is an application that can protect our files and text with encryption, scrambling the contents so that the original form is only displayed if the correct password is entered.

The portable version of VSEncryptor is free. However, although it does not require installation, by default it has several options that change entries in the registry. If you choose to install the application, note that it will replace the search engine and homepage in Internet Explorer and Mozilla Firefox; only by choosing a custom installation can you prevent these changes to your browser.

Although the application’s user interface is plain and not particularly attractive, it works quite well. In the main window there is an interesting list of encryption algorithms: you can select AES (128/192/256-bit), RC2/4/5/6, DES and Triple DES, Blowfish, Twofish, Serpent, Camellia, Skipjack, CAST-256, MARS, IDEA, SEED, GOST, XTEA, and SHACAL-2.

VSEncryptor can use these algorithms to scramble plain text as well as other types of files. As soon as you press the encrypt button, the app asks you to enter a password, which will also be used to decrypt the data.
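VSEncryptor’s internals are not published, but the general technique it describes (deriving a key from the user’s password and running a block cipher such as AES over the data) can be sketched with OpenSSL’s EVP API. The C++ snippet below is a hedged illustration of that technique, not VSEncryptor’s actual code; the function name, iteration count, and salt handling are arbitrary choices for the example.

#include <openssl/evp.h>
#include <openssl/rand.h>
#include <algorithm>
#include <string>
#include <vector>

// Illustrative password-based AES-256-CBC encryption (not VSEncryptor's code).
std::vector<unsigned char> encryptWithPassword(const std::string& password,
                                               const std::vector<unsigned char>& plain) {
    unsigned char salt[8], key[32], iv[16];
    RAND_bytes(salt, sizeof salt);                      // random salt per run

    // Derive a 256-bit key and 128-bit IV from the password (PBKDF2-HMAC-SHA256).
    unsigned char keyiv[48];
    PKCS5_PBKDF2_HMAC(password.c_str(), (int)password.size(), salt, sizeof salt,
                      10000, EVP_sha256(), sizeof keyiv, keyiv);
    std::copy(keyiv, keyiv + 32, key);
    std::copy(keyiv + 32, keyiv + 48, iv);

    std::vector<unsigned char> out(plain.size() + 16);  // room for cipher padding
    int len1 = 0, len2 = 0;

    EVP_CIPHER_CTX* ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), nullptr, key, iv);
    EVP_EncryptUpdate(ctx, out.data(), &len1, plain.data(), (int)plain.size());
    EVP_EncryptFinal_ex(ctx, out.data() + len1, &len2);
    EVP_CIPHER_CTX_free(ctx);

    out.resize(len1 + len2);
    // A real tool would also store the salt (and IV) alongside the ciphertext so
    // the same password can regenerate the key for decryption.
    return out;
}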

Encrypting plain text does not take long, and the same goes for other types of files. For a file of about 20 MB, it only takes a few seconds; encryption speed also depends on the chosen algorithm. By default, the encrypted result is stored in the same location as the original file, but you can change this as you wish.

Also by default, VSEncryptor adds a new <.encrypted> file extension to encrypted items. The same option is available for decrypted files, which get a <.decrypted> extension.

Lenovo ThinkPad T440s, a 14-Inch Full HD Ultrabook with an Intel Haswell CPU & Dual 3-Cell Batteries

The arrival of the 4th-generation Intel Core processor, known as Haswell, has prompted Lenovo to immediately roll out its latest ultrabook model, the ThinkPad T440s, which Lenovo claims is the first ThinkPad to adopt the new processor.

Unlike most existing ThinkPad notebooks, the Lenovo ThinkPad T440s is aimed more squarely at business users, especially given its premium features. The super-thin, light laptop is equipped with a resilient chassis made of carbon fiber and magnesium, a water-resistant keyboard, a touchpad with support for 5-point click and gestures, a pointer nub in the middle of the keyboard, and a dual-battery setup that lets you swap out (unplug and replace) one of its batteries without first turning off the running system.

The Lenovo ThinkPad T440s features a 14-inch LCD screen with a choice of 1600 × 900 or 1920 × 1080 pixel resolutions (HD+/FHD). For users who want greater convenience in operation, Lenovo also offers a touch-screen option and support for NFC wireless technology.

Not only that, the 0.83-inch-thick, 1.5 kg laptop is also equipped with Mini DisplayPort and VGA outputs, 3 USB 3.0 ports, a 4-in-1 SD card reader, a combo audio jack, a smart card reader, a noise-cancelling HD microphone, dual stereo speakers with Dolby® Home Theater® v4 support, and two standard 3-cell batteries that together offer up to 6 hours of use.

As for availability and price, unfortunately no specific information is known so far.

MontaVista Software Extends Support for ARM® Architecture Targeting Telecom and Networking Markets

SAN JOSE, Calif., July 2, 2013 /PRNewswire/ — MontaVista® Software, Inc., the leader in embedded Linux® commercialization, today announced Carrier Grade Edition® (CGE) support for the Carrier Grade Linux 5.0 profile for the ARM architecture. This milestone marks the first CGL-registered product to support the ARM architecture. The tidal wave of smartphone and tablet usage has created a situation where mobile broadband demand is outpacing infrastructure capability. Carriers are racing to expand capacity while reducing the power required to run the mobile broadband telecommunications infrastructure. For almost a decade, Telecom OEMs and carriers have defined their Linux requirements using the Carrier Grade Linux specification. MontaVista has bridged the gap between next-generation silicon on ARM and Telecom Linux requirements.

“To support ARM-based SoC designs for carrier and cloud equipment, we recognize the importance of carrier grade software platforms to be in lock step with those silicon implementations, as this will accelerate time-to-deployment for highly reliable, available and secure next-generation equipment,” said Bob Monkman, manager, Enterprise Networking Segment for ARM. “MontaVista pioneered the Carrier Grade Linux movement, and it continues to be a leading innovator for this software platform that remains the crucial benchmark for network equipment and data centers alike. This milestone is another proof point that the necessary software ecosystem is in place for ARM-based systems to deploy into the global communications network.”

MontaVista’s Carrier Grade Edition is designed for high reliability infrastructure markets. CGE is the standard foundation of a Linux based platform, certified to meet performance requirements, high availability, serviceability, hardening, and real-time response.  The CGE multi-architecture platform allows customers to cross compile across all major architectures knowing they have met all CGL, LSB, and IPv6 requirements.  Only MontaVista provides a Carrier Grade Linux cross-architecture platform that allows telecom & network equipment manufacturers to cross compile from other architectures to ARM for their next-generation devices.

“As the provider of the world’s most widely-deployed Carrier Grade Linux, MontaVista is committed to supporting the ARM ecosystem with certified and high-performance operating systems,” said Patrick MacCartee, Director of Marketing for MontaVista Software.

MontaVista is bridging the gap between IT and Telco Linux operating systems by providing leadership in the Linaro Networking Group (LNG), where it sits on the steering committee, as well as being part of the Carrier Grade Linux community. This experience enables the company to provide a best-in-class platform for ARM in cloud and carrier infrastructure applications. MontaVista is supporting the ARM architecture for a range of applications in the telecom supply chain, and work is underway to provide KVM-based virtualization to enable cloud-based solutions for the mobile core and data plane on the ARM architecture.

“MontaVista has led the way in providing Carrier Grade Linux (CGL) support since the first Requirements Definition document in 2002,” said Mark Orvek, Linaro VP of Engineering. “We’re pleased to see MontaVista Linux Carrier Grade Edition listed by the Linux Foundation as the first distribution to implement the CGL specification on the ARM platform and we’re very happy to be working together with MontaVista and the other industry-leading members of the Linaro Networking Group to develop the future of Linux on ARM in this space.”

MontaVista is committed to compliance with the major industry standards and maintains its position of being the only Linux distribution in the world to comply with the three key requirements issued by the industry’s major standards bodies: CGL, Linux Standard Base (LSB), and IPv6. MontaVista’s Carrier Grade Edition is also the only embedded Linux to be Oracle-certified. This certification demonstrates MontaVista’s ongoing and continued commitment to CGE interoperability with industry software and hardware, and meets the rigorous demands of current and future multi-core network infrastructures. MontaVista has made available copies of the CGL5 registration documents on its website at http://www.mvista.com/products/cge/cgl/cgl.php.

“We applaud MontaVista’s continued leadership with Carrier Grade Linux for the carrier infrastructure market,” said Amanda McPherson, vice president of marketing and developer services Linux Foundation. “MontaVista’s support of the CGL specification for the ARM architecture will be key to enabling a smooth migration to this important SoC architecture that supports millions of devices worldwide.”

“Carrier grade” is a term for software and hardware products that support public telecommunications and data communications networks. Carrier grade products require extremely high degrees of reliability, scalability, and performance to provide an uninterrupted flow of the enormous volume of high-bandwidth data and voice needed for today’s multimedia communications. MontaVista Linux Carrier Grade Edition is the most widely deployed carrier grade Linux solution in the world, and is used by leading network equipment providers (NEPs) including Alcatel-Lucent, Motorola, NEC, and other leading suppliers.

About MontaVista Software
MontaVista Software, LLC, a wholly owned subsidiary of Cavium, Inc. (CAVM), is a leader in embedded Linux commercialization. For over 10 years, MontaVista has been helping embedded developers get the most out of open source by adding commercial quality, integration, hardware enablement, expert support, and the resources of the MontaVista development community. Because MontaVista customers enjoy faster time to market, more competitive device functionality, and lower total cost, more devices have been deployed with MontaVista than with any other Linux.