DZone

John Esposito

Technical Architect at 6st Technologies

Chapel Hill, US

Joined Sep 2011

About

I write code and sometimes draw diagrams.

Stats

Reputation: 4149
Pageviews: 2.0M
Articles: 20
Comments: 58

Articles

Easy Microdata in WordPress
HTML5 is making huge strides toward the semantic web -- and the semantic standards defined by the Google/Bing/Yahoo-backed schema.org are probably prudent standards to follow. But if we're talking prudence, practicality, and semantics, then we're probably talking CMS too -- not that coding isn't unremitting joy, of course, but there isn't much point to hand-coding semantics when you're using a CMS to manage content anyway.

Drupal 7 already supports schema.org microdata, and Lin Clark has written an excellent guide to managing microdata in Drupal. Which is great, unless you don't use Drupal -- and in fact there's a good chance you don't, since WordPress, not Drupal, is the most popular CMS on the web. (Though not necessarily among developers, I'm guessing.)

So for easy schema.org microdata management in WordPress, check out this new (free) plugin by Optimum7. The interface is really simple: pick an item type, fill in the properties. Done. Currently the only item types supported are Event, Person, Organization, Review, Place, and Product -- a lot less than the full set listed on schema.org. More types are promised in the future, but those six are some of the biggest anyway.

Read the full plugin announcement here, or download the zip file if you already know you want it.
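What the plugin emits under the hood is ordinary HTML annotated with schema.org microdata attributes (itemscope, itemtype, itemprop). As a rough illustration of what those annotations mean -- this is a hypothetical helper, not the plugin's code -- here is how such markup can be read back out of a page:

```javascript
// Hypothetical helper (not the plugin's code): read schema.org
// microdata back out of annotated markup.
function extractItems(root) {
  return Array.from(root.querySelectorAll('[itemscope]')).map(function (el) {
    var item = { type: el.getAttribute('itemtype'), props: {} };
    Array.from(el.querySelectorAll('[itemprop]')).forEach(function (p) {
      item.props[p.getAttribute('itemprop')] = p.textContent.trim();
    });
    return item;
  });
}

// Given markup like:
//   <div itemscope itemtype="http://schema.org/Person">
//     <span itemprop="name">Jane Doe</span>
//   </div>
// extractItems(document) yields:
//   [{ type: 'http://schema.org/Person', props: { name: 'Jane Doe' } }]
```

This is exactly the structure search engines walk when they build rich snippets, which is why a CMS plugin that writes the attributes for you is handy.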
May 23, 2023
· 6,459 Views · 2 Likes
Interviews from the ALM Forum: Like SETI@home, but For Your Builds
Check out John's interview with Dave West at the ALM Forum as well.

Remember SETI@home -- the screensaver that uses your idle CPU cycles to find an extraterrestrial needle in a radio-noise haystack? Well, now you can volunteer your CPU time for loads of other scientific projects. One of the coolest is Folding@home -- coolest, I think, because Turing machines are great at double helixes but not so great when degrees of freedom are determined by the interaction of huge, complicated molecules plus environmental chemistry -- like, say, folding proteins.

So that's cool science stuff: saving lives, finding aliens, whatever. But everyday development needs lots of computing power too. What if you could use the same distributed approach -- leveraging whatever idle CPU time is available on your local network -- for builds? Okay, of course that was a rhetorical question. Of course you can. Easily, it turns out. But how?

At the last ALM Forum I spoke with Dori Exterman, CTO of Incredibuild, a tool that turns your local network into a distributed supercomputer for pretty much any compute-heavy dev task. We talked about how such a tool can help you -- but also about how it works, and the answers are pretty neat. This was one of the most exciting interviews I've conducted in a while. Dori is a really smart guy with a really powerful product. Check it out and let us know what you think.
May 23, 2023
· 5,206 Views · 2 Likes
iOS 5 Does HTML5 Brilliantly
If you're developing for iOS, you're probably particularly interested in how iOS handles HTML5. Even if you love Flash, you might want to think about redirecting some of that affection, given today's news from Adobe. Good news for iOS developers, though: iOS 5 handles HTML5 really, really well.

In fact, after firing a battery of HTML5 tests at iOS 5, Sencha concluded in no uncertain terms: "Mobile Safari continues to hold the crown as the best mobile browser, providing the best HTML5 developer platform." That's a pretty ringing endorsement.

HTML5 Canvas is particularly impressive. Sencha's testers report: "In iOS 5, Canvas is between 5x - 8x faster. We tried two examples to see this work. First, the IE HTML5 Speed Reading Test. In iOS 4.x, the draw durations last roughly ~850ms, versus iOS 5, where they are a constant 10ms." Blaze.io agrees, as this video vividly demonstrates.

iOS 5 also added support for Web Workers, which run JavaScript threads in the background in order to keep the main thread free. Sencha tested Web Workers on iOS, and their results came back fine. WebGL works too -- officially for iAds only, although there is a workaround (but the resulting app can't be listed in the App Store).

For more on these upgrades, consider the quick tabular overview of mobile HTML5 support at mobilehtml5.org; read Sencha's discussion of their test results, which concentrate on HTML5; or check out Blaze.io's full performance report, which addresses non-HTML5 improvements too.
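The Web Workers support is easy to picture in code. A minimal sketch (the file name and the prime-search task are illustrative, not from Sencha's tests): a worker runs a heavy loop on a background thread, so the main thread -- and scrolling -- stays responsive.

```javascript
// main.js -- spawn a worker so the heavy loop never blocks the UI
// thread. File name and task are illustrative.
function startPrimeSearch(limit, onResult) {
  var worker = new Worker('prime-worker.js');
  worker.onmessage = function (e) { onResult(e.data); };
  worker.postMessage(limit); // kick off the background computation
  return worker;
}

// prime-worker.js -- runs off the main thread:
//   self.onmessage = function (e) {
//     var largest = 2;
//     for (var n = 3; n <= e.data; n += 2) {
//       var prime = true;
//       for (var d = 3; d * d <= n; d += 2) {
//         if (n % d === 0) { prime = false; break; }
//       }
//       if (prime) largest = n;
//     }
//     self.postMessage(largest); // UI stays responsive throughout
//   };
```

Workers communicate only by message passing (no shared DOM access), which is what makes them safe to run concurrently with page rendering.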
May 23, 2023
· 8,839 Views · 1 Like
HTML5's IndexedDB: Transactions Tutorial
Last week I wrote a brief introduction to Kristof Degrave's ongoing, multi-stage IndexedDB tutorial. Judging by the number of reads, quite a few of you are interested in learning more about HTML5's IndexedDB. I'm following Kristof's tutorial anyway, so I might as well keep posting about it here.

Today Kristof has posted his next IndexedDB tutorial -- Transactions -- and here's where IndexedDB begins to get exciting, where the work of creation and definition begins to pay off. We're preparing for actual data retrieval and manipulation, so we'll be creating a READ_WRITE transaction.

At this point, if you're trying to understand IndexedDB formally as well as use it pragmatically, you might want to get more comfortable with the W3C's conceptual treatment of transactions, along with the formal object description and maybe the IDBTransaction interface too. (For me, it especially helps to understand emerging tech like HTML5 a little more abstractly, just in case the standard takes a different turn than expected.)

If you prefer learning by doing, here's how Kristof explains transactions: "Today, I'll handle the transaction subject. As said in previous posts, every request made to the database needs to be done in a transaction. So for every read or write request we need to create a new transaction. Therefore we need a database connection and two arguments that we will pass to the transaction method."

The post is, like his previous tutorials, quite straightforward -- painlessly showing you how to use what is potentially one of the most powerful features of HTML5. Take a look, create an IndexedDB transaction, and get ready to retrieve and manipulate data.
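For a concrete picture of the pattern the tutorial describes -- a database connection plus the two arguments passed to the transaction method (the object stores involved, and the access mode) -- here is a minimal sketch. The database and store names are illustrative; note that early IndexedDB drafts spelled the mode as the IDBTransaction.READ_WRITE constant, while the shipped spec uses the string 'readwrite'.

```javascript
// A sketch of the tutorial's pattern; "MyDb" and "items" are
// illustrative names, not from Kristof's post.
function saveItem(db, storeName, item, onDone) {
  // The two arguments to transaction(): the object store(s) the
  // transaction covers, and the access mode.
  var tx = db.transaction([storeName], 'readwrite');
  tx.oncomplete = onDone; // fires once every request in the tx finishes
  tx.objectStore(storeName).put(item);
  return tx;
}

// Usage (browser only):
//   var req = indexedDB.open('MyDb', 1);
//   req.onsuccess = function () {
//     saveItem(req.result, 'items', { id: 1, name: 'demo' }, function () {
//       console.log('written');
//     });
//   };
```

Every read and write really does go through such a transaction; there is no direct "just put this" API, which is the point of this stage of the tutorial.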
May 22, 2023
· 8,892 Views · 1 Like
HTML5 <time> Element: Returned!
Well, after much hubbub, including some here at DZone, the HTML5 <time> element has returned. Paul Cotton, on behalf of the chairs of the working group, issued a revert request -- and his explanation is interesting:

"The Chairs have received multiple requests to revert change r6783. This change is related to bug 13240 [1] which was never sent to the HTML WG since it used a possibly incorrect Bugzilla component. Since WG members were NOT notified of the creation of this bug the Chairs have decided that this change should be subject to the Enhanced Change Control rules in the WG Decision Policy [2]: 'Therefore during a pre-LC review, or during a Last Call, feature additions or removals should only be done with sufficient prior notice to the group, in the form of a bug, a WG decision, or an on-list discussion. This applies only to LC-track drafts and does not apply to drafts that may include material for future versions of HTML.' We therefore ask for a revert of this change to be completed no later than the end of day on Tuesday 8th of November. If this revert is not complete by that time, we will instruct W3C staff to make this change."

In other words: people don't like it, we never really meant to approve it, and we're not really sure how it got through in the first place. Now, the decision policy quoted sounds as though it would not invalidate the change, since the 'bug' had been listed (and discussed) since July. I don't know what 'possibly incorrect Bugzilla component' means -- did they actually find something misconfigured in Bugzilla? -- but the vague hedging of 'possibly incorrect' raises my suspicions a bit. The meeting minutes don't help much (though it's neat to glimpse how these conversations go).

After the decision, a proposal to modify the reverted element was posted on the W3C wiki. This might map the near future of <time>, so it's worth checking out for that reason alone -- though also, again, to help understand how HTML5-spec decisions are made.

But however it happened, <time> is back. So: did the W3C WG actually bow to popular outcry, or was there really just a bug in their bug-review system? I don't know, but I'm curious. What do you think?

Update: Discussion has re-opened in the original bugpost since the revert command came through -- some deductive, some inductive. Results from the blekko web grep mentioned in the last comment might be very interesting...
May 22, 2023
· 10,190 Views · 1 Like
HTML5 on Android 4.0: Way Better, Still Behind iOS 5
So affirms Sencha, in the latest installment of their HTML5 developer scorecard series. Four-sentence version:

"After putting the Galaxy Nexus through our test wringer, we can say that Ice Cream Sandwich is a major step for the Android browser. However, it still falls short of iOS 5. It's a solid browser for normal page browsing and it adds major new features that support most of the HTML5 spec. It also has taken a big step forward in correctness of rendering, which is a welcome change for people who want to push their mobile browsers to the limit."

The most exciting new feature support, in Sencha's opinion: tons of CSS3, including the more natively slick effects -- animations, reflections, transformations, and transitions. Some specific missing features:

  • Web Workers
  • Web Sockets
  • WebGL
  • datetime and range input types
  • overflow-scrolling
  • Shared Workers

The device Sencha used was a Samsung Galaxy Nexus, which means that some performance and zoom issues might tell you as much about the hardware as about the OS. But the biggest rendering improvement: rendering was simply correct.

One way Ice Cream Sandwich beat iOS 5? Embedded inline HTML5 video. Videos actually played inline on the Galaxy Nexus in Sencha's tests; they didn't on the iPad and iPhone running iOS 5.

Here's Sencha's rather glowing closing summary:

"In summary, the Galaxy Nexus and Ice Cream Sandwich are a major step forward for the Android platform. Feature by feature, HTML5 support has gotten much better, rendering has become more accurate, and performance has gotten much faster. Although still behind the current HTML5 gold standard of iOS 5, Android 4.0 is night and day compared to previous versions."

That 'night and day' is pretty strong, and definitely great news for HTML5 developers. If you're developing HTML5 apps for mobile, you should probably read the full report, which includes JavaScript performance numbers via SunSpider, Acid3 scores, and detailed results of Sencha's own touch-specific test suite.
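Rather than inferring support from the Android version string, the missing features can be probed at runtime. A hedged sketch of the usual feature-detection idiom (this is not Sencha's test suite):

```javascript
// Hedged sketch (not Sencha's suite): probe for features the
// scorecard lists instead of trusting the OS version.
function detectFeatures(win) {
  var doc = win.document;
  var canvas = doc.createElement('canvas');
  var input = doc.createElement('input');
  input.setAttribute('type', 'range');
  var webgl = false;
  try {
    webgl = !!(canvas.getContext('webgl') ||
               canvas.getContext('experimental-webgl'));
  } catch (e) { /* context creation can throw on old stacks */ }
  return {
    webWorkers: typeof win.Worker !== 'undefined',
    webSockets: typeof win.WebSocket !== 'undefined',
    webGL: webgl,
    // unsupported input types silently fall back to "text":
    rangeInput: input.type === 'range'
  };
}

// Usage in a browser: console.log(detectFeatures(window));
```

Detection keeps working as browsers catch up, which matters on a platform improving as quickly as this report suggests.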
May 22, 2023
· 16,640 Views · 1 Like
HowTo: Store and Retrieve Images in a SQL CE Database on Windows Phone Mango
Serious local database support is probably one of the coolest new features of Windows Phone 7.1 (Mango). For the Windows Phone developer, it's not hard to create a local database, or add some columns, indexes, or tables. But if you're using a SQL CE database then you are, after all, developing for a phone. And one of phones' most exciting powers isn't their hard drives -- it's their cameras. And it turns out that Mango makes storing camera photos -- or any image data, for that matter -- pretty easy.

To see how easy, look at this HowTo from Anton Swanevelder, posted a few days ago on his blog. Anton breaks SQL CE image storage into three steps (the CRU in CRUD), and every step takes less than 20 lines. For example, you can create a column to store image data like this:

    [Column]
    public byte[] ItemImage
    {
        get { return _ItemImage; }
        set
        {
            if (_ItemImage != value)
            {
                _ItemImage = value;
                NotifyPropertyChanging("ItemImage");
                NotifyPropertyChanged("ItemImage");
            }
        }
    }

The other two steps are more interesting (converting a camera stream to a storable byte array, then converting the byte array to a bitmap usable in XAML), but no more difficult. Read the full post for the full implementation.
May 22, 2023
· 10,699 Views · 2 Likes
How to Write a Standard: An Inside View of the CSS Working Group at W3C
Suffering a little whiplash after the rapid-fire removal and return of HTML5's <time> element, I became curious about how the working groups at W3C actually, well, work. In particular, I noticed something about the wording of Steve Faulkner's original revert request:

"the editor of the HTML5 specification has made a change to the specification that is not supported for good reasons (see below, source: http://willyou.typewith.me/p/9Zl7I2dOKs) I therefore request a revert of this change http://html5.org/r/6783, so that it can be further discussed and decided within the consensus based HTML WG process."

Emphasis (er, offset) added. The editor-vs.-consensus theme chimed with an early, rather severe response to the original decision, calling Hickson's move 'self-contained'. Okay, everybody likes consensus, especially about standards. But the once-upon-a-time student of decision theory and commitment devices in me perks up skeptically at (even implicit) accusations of unilateralism.

Lucky for me, an Invited Expert from the CSS Working Group at W3C has already posted a thorough treatment of how the CSS group works. The inside view really gives a better feel for how people actually act in the CSS group -- more than, for example, the official charter and process document of the HTML Working Group (which are very top-down, as presumably documents of this sort must be).

CSS isn't HTML, of course. But CSS is now being developed in modules, rather than tangled, monolithic versions; and one of the differences between W3C and WHATWG (the 'other' HTML5 standards group) is that W3C is maintaining the kinda-versioned 'HTML5' designation, while WHATWG now treats HTML as a 'living standard' (complete with an exacting list of differences between the W3C and WHATWG specs). So versioning is a bit of a thorny point in both HTML(5) and CSS, and the issue of versioning must deeply affect any standards-regarding decision-making process.

Indeed, the 'Inside View' grants modularization a whole page to itself. The full site goes into a lot of gritty detail -- interesting for anyone interested in decision-making at this level, but especially for anyone involved in defining new web standards. But most of us aren't defining new web standards. So, for the rest of us, here's an outline of how the CSS Working Group does its thing, in tl;dr form:

People and Roles:
  • module editors (in charge of each module)
  • CSS WG members (inner group of discussants)
  • www-style contributors (all other discussants)

Communication:
  • mailing lists (technical discussions; high volume; members follow closely)
  • telecons (1 hr, once/wk; chair presides, scribe takes minutes)
  • face-to-face meetings (3 full workdays, 3-4 times a year; half in the USA, half split between Europe and Asia; one meeting takes place alongside other W3C groups; addresses the deepest/hardest/most complex issues)
  • IRC (side discussions during official telecons; unofficial chats)
  • internal mailing list (mostly just planning meetings and other administrative tasks; any technical discussion is immediately moved to the public www-style list)
  • www.w3.org (homepage with specs and blog)
  • dev.w3.org (editors' draft specs, with revision history)
  • wiki.csswg.org (lots of stuff, technical and administrative; general-purpose, like any good wiki)
  • test.csswg.org (subdomain = giveaway)

Making Decisions (usually somewhat informal; for this one, read the full treatment)

Modularization (first formulated during the 2007 CSS WG meeting in Beijing; the page includes history and rationale)

Spec Process:
  • working draft (with numbered iterations, until Last Call Working Draft)
  • candidate recommendation (calls for implementations; this usually means lots of implementations already exist)
  • recommendation (= finished; arrived at only after two correct independent implementations exist)

Sources of Innovation (the full post discusses three different sources for CSS3 Backgrounds and Borders)

Makes sense to me. The site is much more discursive than this outline summary -- and the discursiveness gives a better feel for what it's like to participate in the WG, so the read is pretty fascinating.
May 22, 2023
· 5,692 Views · 1 Like
How AMD's Heterogeneous Systems Architecture Works, and Why
(This article is the second in a two-part series leading up to the AMD Fusion Developer Summit, the only developer conference dedicated specifically to heterogeneous computing. Check out the first article for a conceptual overview, with extensive resource links.)

Recently Anand Lal Shimpi hosted a community Q&A with Manju Hegde, Corporate VP of Heterogeneous Applications and Developer Solutions at AMD. The topic: Heterogeneous Systems Architecture (HSA), the standards-based, AMD-led effort to ease development of heterogeneous systems, especially CPU+GPU systems. Normally I'd just send you over to that most excellent Q&A -- but in this case the questions are so good, and Manju's answers so thorough, that you might not have a chance to read everything. So here's a detailed summary, with links to more in-depth resources:

Differences between Fusion and HSA: Fusion's goal was to let developers use the GPU along with the CPU; HSA's goal is to make the GPU a first-class programmable processor. Specific HSA improvements:
  • C++ support for GPU computing
  • all system memory accessible by both CPU and GPU
  • unified address space (hence no separate CPU/GPU memory pointers)
  • GPU uses pageable system memory (hence accesses data directly in the CPU domain)
  • GPU and CPU can reference each other's caches
  • GPU tasks are context-switchable (especially important to avoid touch-interface lag -- contexts switch rapidly in heterogeneous environments)

(GP)GPU versatility: Non-UI use of the GPU is currently active at a basic level in security, voice recognition, face detection, biometrics, gesture recognition, authentication, and database functionality. But each task is currently GPU-routed by hand; HSA will make GPU use in all these non-UI domains much easier over the next few years.

C++ AMP and HSA: C++ Accelerated Massive Parallelism (AMP) is the Microsoft alternative to OpenCL. Both are excellent, and will fill similar roles within the larger HSA. Because C++ AMP does not represent a huge departure from C++, the AMP learning curve will be relatively shallow.

Gaming vs. compute performance: GPU architecture and production costs mean that there is usually an inverse relationship between gaming and pure compute performance. This means, in turn, that desktop (i.e., non-specialized) GPU design involves a careful balancing act between the two (see the Q&A for a technical overview of some reasons why -- it's more than just GPUs' excellent floating-point performance).

AMD and developers: In the past, AMD tended to engineer products, and stop there. Now, because HSA involves a much more serious attempt to encourage heterogeneous systems development, AMD will be working more closely with developers to help them take advantage of (especially GPU) powers they might not have been able to use before.

The advance of the APU: AMD has no grand strategy to promote APUs, even though it already makes numerous different kinds of APUs. Every APU is designed as a response to a specific use case.

The advance of OpenCL: AMD is deeply interested in strengthening OpenCL itself, and to that end has recently driven these OpenCL initiatives:
  • an improved debugger and profiler: Visual Studio plugin, standalone Eclipse, Linux
  • a static C++ interface
  • extended tools via close collaboration with MulticoreWare (PPA, GMAC, TM)
  • an OpenCL book and programming guide
  • a university course kit (for use with the aforementioned book and programming guide)
  • webinars and self-training material
  • online hands-on tutorials at the Developer Summit (select 'Hands On Lab' under 'Session Type')
  • a moderated OpenCL forum
  • OpenCL training and service partners
  • OpenCL acceleration of major open-source codebases
  • Aparapi, to let Java coders use OpenCL more easily

The continuing (but receding) importance of device-specific GPU optimization: Roughly speaking, as GPUs become more general-purpose (GPGPU), the need to optimize for specific GPUs will approach the (real but relatively low) need to optimize for specific CPUs.

The CPU-GPU bottleneck (or, whether to use PCIe 3.0 or on-die CPU/GPU integration): The impact of the bottleneck depends hugely on the algorithm.

The problem of GPU physics: Simple techniques (resolution, antialiasing, texture resolution) scale graphics easily across many levels of hardware capability -- and this is how game developers have used GPUs in the past. Physics does not scale across hardware nearly as easily, so most developers handle GPU physics at the lowest (console) level. But HSA will make cross-hardware physics scaling much easier.

HSA's benefits to small but parallel workloads (versus earlier GPGPU acceleration, which had a disproportionately large effect on workloads with lots of data): HSA does not require cache flushing and copying between CPU and GPU, so the quantity of data shared matters much less than in previous GPGPU acceleration attempts.

HSA availability and AMD's long-term commitment to developers taking advantage of heterogeneous computing: AMD will continue to hold Fusion Developer Summits annually; is already partnering with Adobe, Cloudera, Penguin Computing, Gaikai, and SRS, and working closely with Sony, Adobe, Arcsoft, Winzip, Cyberlink, Corel, Roxio, and many more; and will continue to help make OpenCL development much easier. But the open-standard HSA is where AMD's major, highly ambitious effort in heterogeneous computing will lie, beginning in 2013-2014.

HSA and HPC (high-performance computing): AMD is designing HSA-based APUs for both consumer and HPC markets. Penguin Computing will explain some of their HPC applications in detail during the upcoming Fusion Developer Summit (June 11-14).

How software stacks will catch up with heterogeneous hardware: The HSA Intermediate Layer (HSAIL) will help by insulating software stacks from individual ISAs.

Why use graphics shading languages (OpenCL, DirectX) at all: Radical change must be evolutionary, not revolutionary (e.g., assembly -> C -> C++ -> Java). Existing codebases must be used effectively, not abandoned for code written in a theoretically perfect language (the 'software side' of heterogeneous computing). HSA is designed to help developers take advantage of their own skills and existing codebases at the same time.

As several of these questions noted, the annual AMD Fusion Developer Summit is an essential component in the eventual rollout of the open-standard Heterogeneous Systems Architecture. No other conference covers heterogeneous computing specifically. The track list is amazingly broad, and the schedule incredibly ambitious. To GPGPU wrestlers and non-wrestlers alike, heterogeneous computing is a thrilling emerging technology. Learn more and consider attending the conference on June 11-14.
May 22, 2023
· 13,782 Views · 1 Like
New IE10 Platform Preview: Now With More HTML5
It has often been remarked that IE is a headache for developers -- in part because Microsoft tends to prefer its own versions of web standards. With IE10, developers can take advantage of the evolution of web standards, with even more HTML5 access and features.

The Retirement of Internet Explorer

On the other hand, quite a few 'modern' web technologies -- including XHR -- were originally Microsoft innovations, only later adopted by... well, everyone else. Today Microsoft has a brand-new browser, Microsoft Edge, and Internet Explorer 11 is set to be retired on June 15th, 2022, meaning there will no longer be support provided for users or developers alike.

In some fundamental ways, IE9 actually handled what it was meant to handle fairly well -- Chakra, for example, shows that IE9 uses less CPU time and RAM than Chrome under Windows. Even today, Microsoft Edge clocks in at around 600-800 MB of RAM compared to Google Chrome's 1.4 GB for a single browser window. Of course, in terms of support for emerging web standards, IE9 lagged well behind its competitors. IE9 may have done some things well, but it did not do as many things as other browsers do (even if it did some cool things, like pinned sites, that other browsers don't).

Advantages of Using IE10

While using Internet Explorer has not been in style for more than a decade, it is still used by many people around the world, especially those who do not have access to updated software or hardware. Even if IE10 is not your preferred browser to work in, ensuring that any website or application you build is compatible with Internet Explorer 10 can still be essential. Some of the advantages of using IE10 include:

Speed: IE10 has been updated for improved performance and speed, with fast loading both when browsing and when developing projects of your own.

User-friendly: This version of Internet Explorer is extremely user-friendly, with an updated interface for easy and straightforward navigation.

Tab implementation: With Internet Explorer 10 you can use tabs easily and with minimal effort, keeping numerous websites open simultaneously without opening another window.

Pinned sites: Use the 'Pinned Sites' feature to keep track of favorite websites, or of sites that are useful for resources and tools while programming.

Memory usage: If you are concerned about how much memory your browser is using, IE10 is typically a smaller choice than alternatives such as Google Chrome. On average, Google Chrome can use anywhere between 300 MB and 1 GB of memory just while browsing online; Internet Explorer 10 can cut that memory usage in half in most instances.

Flash add-on: Enhanced add-ons are available for Flash with Internet Explorer 10, for those who are interested in loading web pages with Flash or working with Flash during development.

Development tools: If you enjoyed Internet Explorer 9's Developer Tools, you will be relieved to know that Internet Explorer 10 has kept the same F12 Developer Tools for quick and easy access. Keep in mind, however, that it is difficult to test a site hosted on your own machine with Internet Explorer 10 without a manual workaround: when using Developer Tools, Internet Explorer 10 switches the pages you are working on into a compatibility mode by default, making it a bit harder to test pages locally without uploading them to your own web server(s).

Drawbacks of IE10

Unfortunately, as with any piece of software, there are also drawbacks to using Internet Explorer 10, even for those who are truly committed to Microsoft's creations. The most notable drawbacks for programming, development, or browsing include:

Lack of Flash: Internet Explorer 10 ships with no built-in Flash player or Flash support, which can make it difficult to load websites with Flash players or Flash animations built into the site itself. While Flash is not inherently supported, it is possible to use and access Flash-based websites and applications with an Internet Explorer 10 add-on designed for enhanced Flash performance.

Lack of PDF support: It is not possible to load previews or entire PDFs in Internet Explorer, even in the most updated version of the browser. There is no PDF reader available for IE10, which can make it difficult to access a variety of documents, for work or for personal use.

Outdated: Now that Internet Explorer 10 is being retired, it is an outdated piece of software. With little to no future updates from Microsoft, Internet Explorer 10 will not gain additional features, add-ons, or plugins relevant to development or to working with HTML5.

IE10: Now With More HTML5

But Microsoft promised big changes for IE10, and made a developer preview available to prove it. This preview includes considerably expanded HTML5 support, and (most impressive to me) leverages hardware acceleration (a la Chakra) to speed up graphical technologies (e.g., SVG, CSS3 transforms) -- check out the embedded video in the full announcement. Microsoft highlighted a few specific HTML5 features, newly available in this developer preview:

  • Cross-Origin Resource Sharing (CORS) for safe use of XMLHttpRequest across domains
  • File API Writer support for BlobBuilder, allowing manipulation of large binary objects in script in the browser
  • support for JavaScript typed arrays for efficient storage and manipulation of typed data
  • the CSS user-select property, to control how end users select elements in a web page or application
  • support for HTML5 video text captioning, including time-code, placement, and captioning file formats
  • improved integration with the operating system

A Metro mode is also available for browsing and developing on touchscreen smartphones and tablet devices. Additional HTML5 features supported by IE10 include:

  • File Reader API
  • forms validation
  • drag and drop
  • iframe sandbox attribute support
  • CSS3 gradients
  • Flexbox

Improvements to the IE10 browser that benefit developers include:

  • the ability to run script in the background (Web Workers API) without impacting the frontend
  • ECMAScript 5 support (which adds methods such as JSON.parse)
  • the browser's Flash player is no longer a plugin but a built-in element of the browser itself
  • the 'Do Not Track' feature is enabled by default, which can prevent ad trackers from harvesting users' browsing data

The full documentation of IE10's HTML5 support is here, and the rest of the developer documentation here. While it seems like just yesterday that Microsoft released Internet Explorer 10 (on October 26th, 2012), its time for support has come to an end. Keep in mind that no further support will be available for any version of Microsoft's Internet Explorer following the official retirement of the browser on June 15th, 2022.
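Of the HTML5 features listed, typed arrays are the easiest to demonstrate in a self-contained way. A small sketch of what "efficient storage and manipulation of typed data" means in practice:

```javascript
// One underlying 16-byte buffer...
var buffer = new ArrayBuffer(16);

// ...viewed as four 32-bit floats:
var floats = new Float32Array(buffer);
floats[0] = 1.5;

// ...and the same bytes viewed as sixteen unsigned 8-bit integers,
// which is how script can manipulate binary data (blobs, XHR
// responses) without string round-trips:
var bytes = new Uint8Array(buffer);

console.log(floats.length); // 4
console.log(bytes.length);  // 16
```

Because both views share one buffer, writing through one view is immediately visible through the other -- the layout is fixed and binary, not a boxed JavaScript array.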
July 24, 2022
· 8,384 Views · 1 Like
CSS3 Transitions vs. jQuery Animate: Performance
Animation Brings a Whole New World to Life Where would we be without animators helping us see the world in a new light? It is easy to forget the hard work these people do, and that is why we also have to ask which technique — CSS3 transitions or jQuery's animate() — we should reach for when doing animation work on the web. You might think animation is just for children's movies and television shows, but that isn't quite right: it has a wide range of uses in our world today (more on this later), and the web increasingly relies on it. That is why plenty of people are pushing to figure out which tools bring the best animation outcomes. Rich Bradshaw has written a detailed tutorial series on CSS Transitions, Transforms, and Animation. That alone is worth reading; but in case you weren't convinced, Rich also put together a little (and maybe a little unfair) performance comparison: A head-to-head comparison of CSS transitions and jQuery's animate. Rather than setting a timer to run repeatedly, transitions are handled natively by the browser. In my rather unscientific testing, transitions are always quicker, running with a higher frame rate, especially with high numbers of elements. They also have the advantage that colours can be animated easily, rather than having to rely on plugins. 
Putting it All to the Test The best way to compare the two approaches is to run them through the same series of tests and let each show what it can do. That is exactly what Rich did: drive both under various conditions — different numbers of elements, different rates of change — and measure how well each keeps up. There is a lot to be gained from this kind of benchmarking: you don't want to build on a technique that is sub-par compared to the alternatives, so it is fortunate that people are willing to test both and publish the outcome. Unsurprising Results You should not be shocked to learn that CSS3 transitions come out ahead. Because they are handled natively by the browser rather than driven by a JavaScript timer, they produce smoother transitions from frame to frame and hold a consistently higher frame rate, especially as the number of animated elements grows. The results probably won't surprise you, and the conclusion is inevitable ('use CSS3 for animation when you can'), but Rich's analysis (scroll down), using the Timeline view in the WebKit Inspector, is pretty neat: (Actually, the Timeline is pretty neat, period. I didn't know about it until now... sweet.) 
So check out Rich's test and performance discussion, and maybe use the WebKit Inspector's Timeline for performance fine-tuning in the future. Animation Into the Future The tools used in the world of animation will keep being called upon to help creators do challenging work. The demand for animated films and television series has not ceased, but the usefulness of animation goes far beyond entertainment. There are uses for animation in simulations of all kinds. Even the military uses animation to simulate battlefield conditions and other concerns relevant to its operations, helping it sort out how best to move forward with a plan of action whenever the need arises — put another way, to make sure it is never caught off guard. Courtrooms are seeing an increasing amount of animation, too, in presentations put on by prosecutors and defense counsel alike: animated reenactments of aspects of an alleged crime help a jury see how each side claims events unfolded. CSS3 technology is helping to make animation more widespread and available to a larger number of people across all walks of life. This trend looks likely to continue, and many people are already counting on CSS3 for projects they have in the works. We should all celebrate the fact that such technology is making animation more accessible. Using the Best Programs Doesn't Have to Be Costly One more thing to keep in mind when you look for animation tools: they don't have to be costly. Many are free and/or open-source, and even the ones you have to pay for aren't necessarily wildly expensive. 
They provide a huge ROI when put to use, which alone should tell you why they matter. Consider the options before you when selecting an animation tool, because the last thing you want is to produce lower-quality animation just to save a few dollars on a cheaper program.
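To make the comparison concrete, here is a hedged sketch (mine, not Rich's code) of what a timer-driven animate() must do under the hood — recompute an intermediate value on every tick — versus a CSS transition such as `transition: left 0.4s linear;`, where the browser performs the same interpolation natively:

```javascript
// A JavaScript timer animation computes each intermediate frame itself
// (linear easing shown); a CSS transition hands this interpolation to
// the browser, which is why it tends to hold a higher frame rate.
// Simplified: real jQuery uses easing functions and smarter scheduling.
function interpolate(start, end, duration, elapsed) {
  const t = Math.min(elapsed / duration, 1); // clamp progress to [0, 1]
  return start + (end - start) * t;
}

// Simulate a 400ms animation of `left` from 0px to 200px, one frame
// every 100ms -- the values a timer-driven animation would set.
const frames = [];
for (let elapsed = 0; elapsed <= 400; elapsed += 100) {
  frames.push(interpolate(0, 200, 400, elapsed));
}
console.log(frames); // [ 0, 50, 100, 150, 200 ]
```

Every one of those intermediate style writes happens on the main JavaScript thread, which is exactly the cost the native transition avoids.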
July 24, 2022
· 23,279 Views · 1 Like
From Relational to Really Relational: The RDB2RDF Working Group
While a lot of databases store information in table format, the web isn't set up in a tabular style — and neither is plenty of the data, in a variety of formats, that the web uses. Many databases still use tables, because many developers feel that tables handle large amounts of data better than any other structure. Others feel that relational data (RDB) should be converted to RDF, a format used to gather an even wider array of metadata across the worldwide web. The ability to convert to RDF will be increasingly beneficial as technology advances toward artificial intelligence (AI) and beyond. What is an RDB? RDB stands for Relational Database: a collective set of multiple data sets organized in tables, columns, and records. An RDB establishes well-defined relationships between database tables, and the tables share information in ways that make it possible to search, organize, and report on data. The relational model derives from the mathematical concept of mapping data sets, as developed by Edgar F. Codd. RDBs use Structured Query Language (SQL), a standard user application that provides an easy programming interface for database interaction. RDBs organize data into tables, each known as a relation, which contain columns. Each table row, or record, contains a unique data instance defined for a corresponding column category. Record characteristics relate records to one another, forming functional dependencies. RDBs perform "select", "project", and "join" database operations, where select retrieves data, project identifies data attributes, and join combines relations. Those who prefer RDBs do so because of their advantages, including easy extensibility and scalability, strong performance, and data security. What is RDF? RDF is primarily used to provide information, or metadata, about data available on the Internet. 
RDF provides the methodology for specifying, structuring, and transferring metadata, along with a basic XML syntax that software applications can use to exchange that information; a URI/URL provides the location of the data. RDF stands for Resource Description Framework, a standard for describing web resources and data interchange developed and standardized by the World Wide Web Consortium (W3C), with Extensible Markup Language (XML) and Uniform Resource Identifiers (URIs) serving as its distribution standards. Typically, RDF provides basic information and attributes about an Internet-based object, such as the author's name, web page keywords, creation or editing dates, or the sitemap. While there are many conventional tools for dealing with data — and more specifically with relationships between data — RDF is the easiest, most expressive, and most powerful standard to date. The overall informational value is much greater because context or intent can be inferred: RDF presents small chunks of information in a form that carries meaning, which can include rules about how the data should be interpreted. RDF is the standard for encoding metadata and other structured information on the Semantic Web. Semanticizing Data With all the semantic standards, database-centered HTML5 APIs, and a W3C standard that calls for implementations, this is an exciting time for data on the web. It's time to embrace RDF, with the capacity to start pulling relational data into the semantic web! The Purpose of RDBMS The software used to store, manage, query, and retrieve data in a relational database is called a Relational Database Management System, or RDBMS. The RDBMS provides an interface between users and applications and the database, along with administrative functions for managing data storage, performance, and access. Semanticizing, or giving meaning to, all data can be done in two stages. 
First, construct a web of meanings, not documents -- as Sir Tim Berners-Lee has always wanted, and as RDF, the Resource Description Framework, seeks to do. Second, fit all tabular data into that web, whether it fits naturally or not. This second step is less exciting than the first, because plenty of tabular data is not ideally tabular; in those cases the second step is rather backward-looking. But it is no less necessary than the first, for two reasons: Converting every RDBMS to RDF is not even close to worth the effort. Much data ought not to be converted to RDF at all. All of this data still needs to talk to the web, which means it needs to be translated into a webby structure, ideally RDF. The easiest way to translate without conversion is, of course, plain mapping. But mapping two rather different structures onto one another is no trivial undertaking — which is why there's a whole W3C Working Group devoted to devising a mapping language and an actual mapping of relational data to RDF. Sir Tim offers this insight into the RDF-RDBMS relation, cutting through questions that might otherwise be couched in domain-inappropriate terms (like 'is the RDF model an entity-relationship model?'): Relational database systems manage RDF data, but in a specialized way. In a table, there are many records with the same set of properties. An individual cell (which corresponds to an RDF property) is not often thought of on its own. SQL queries can join tables and extract data from tables, and the result is generally a table. So, the practical use for which RDB software is used is typically optimized for doing operations with a small number of tables, some of which may have a large number of elements. Because relational databases are species of the genus described by RDF, the basic mapping model is as follows: a record is an RDF node; the field (column) name is an RDF propertyType; and the record field (table cell) is a value. So far, so straightforward. 
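That basic model is mechanical enough to sketch in a few lines. The snippet below is a hedged illustration of the record-to-node mapping just described — the table name, column names, and URI scheme are invented for the example, not taken from any standard:

```javascript
// Map one relational record to RDF-style triples:
// record -> node (subject), column name -> property, cell -> value.
// Table/column names and the base URI are illustrative only.
function rowToTriples(baseUri, table, primaryKey, row) {
  const subject = `<${baseUri}/${table}/${row[primaryKey]}>`;
  return Object.entries(row)
    .filter(([column]) => column !== primaryKey) // the key names the node
    .map(([column, value]) =>
      `${subject} <${baseUri}/${table}#${column}> "${value}" .`);
}

const triples = rowToTriples(
  'http://example.org', 'people', 'id',
  { id: 7, name: 'Ada', city: 'London' });
console.log(triples);
// [ '<http://example.org/people/7> <http://example.org/people#name> "Ada" .',
//   '<http://example.org/people/7> <http://example.org/people#city> "London" .' ]
```

Real mappings must also handle foreign keys, NULLs, and datatypes — which is exactly where "so far, so straightforward" stops being true.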
Of course, implementations usually wander pretty far from the original concept; that's why mapping actual RDBMSs to RDF takes a bit of dirty work. RDB2RDF Enter RDB2RDF. The RDB2RDF Working Group is doing that dirty work. Back in 2005, when the Group was still an Incubator, it published a detailed survey of then-current approaches to mapping relational databases to RDF. This survey served as the starting point for typically extensive discussion and debate, which culminated in two Candidate Recommendations: R2RML, the RDB-to-RDF mapping language; and the Direct Mapping of relational data to RDF itself. Many techniques and tools have been proposed to enable the publication of relational data on the web in RDF. RDB-to-RDF methods are one of the keys to populating the web of data, by unlocking the huge amount of data stored in relational databases. Since producing RDF data with sufficiently rich semantics is often important in order to make the data usable, interoperable, and linkable, various strategies have been developed to enrich data semantics. Turning RDB into RDF has proven valuable when dealing with SQL databases: it offers a straightforward and practical path for converting relational databases to RDF. RDB2RDF and the Future Moving beyond RDB-to-RDF methods, it will become necessary to find a compromise between the expressiveness of RDB-to-RDF mapping languages and the need to update relational data using semantic web protocols. Creating, updating, and deleting RDF data should only be possible in a secure, reliable, trustworthy, and scalable way.
July 24, 2022
· 9,407 Views · 1 Like
How Agile Are You? – The Results
An analysis of how much of DZone's audience lives up to the lofty goals of the Agile Manifesto.
July 23, 2015
· 2,283 Views · 0 Likes
It's Time to Start Programming (for) Adults
This week we're in Boston at DevNation, an awesome, young (second ever), and relatively intimate (~500 attendees) conference on anything and everything hard-core, cool-and-hot (DevOps, big data, Angular, IoT, you name it), and of course -- since the conference is organized by Red Hat -- totally open-source. So far I've had in-depth conversations with five super-amazing engineers, attended several inspiring keynotes, and chatted with one skilled developer after another. We'll transcribe the deeper interviews shortly, including some on topics totally unrelated to this post. But meanwhile I'd like to offer some thoughts inspired by the first day of the event. The general theme is: we're just beginning to get serious about separation of concerns. The metaphor that keeps popping into my head comes from the first keynote: machines have finally grown up. Imperatives: telling really unintelligent agents what to do (and then they sort of do whatever they please) It is trivial to observe that computers are incredibly stupid. Turing's fundamental paper is about how to figure out whether a theoretical computer will keep calculating the values of a function until the heat death of the universe (okay that's a slight oversimplification). The fact that Edsger Dijkstra felt the need to rail gently against all goto statements in any higher-level language than machine code suggests that, in 1968, far too many computers needed instructions about how to read the instructions that tell them what to do in the first place. Richard Feynman's famous lecture on computer heuristics is the condescension of the man who conceived quantum computing to the level of functional composition (hmmm) and file systems (double sigh). Stupid agents need to be told exactly what to do. Then they need to be told to pay attention to the exact part of the command that tells them exactly what they have been told to do (dude, just goto line 1343 already and shut up). 
Then they don't do what you told them (optimistically we call this an 'exception'), and then you send them into time out / set a breakpoint and try to figure out where the idiot state mutated off the rails. They stare blankly at the wall / variable / register and either do nothing or repeat another unintelligibly wrong result until you notice that your increment is (apparently meaninglessly to you) one bracket too deep. You sigh and tell them what to do again, and after a while they hit age thirty (life-years/debug-hours) and maybe do something useful with their (process-)lives. Well, maybe I'm straining the metaphor a little here, but you get the point, because it cuts too close to home. We spend far too much time fixing stupid mistakes that we didn't even know we were making because -- like all actual human beings -- we assumed that the agent we commanded would use common sense to iron out those few whiffs of, admit it, frank nonsense that our step-by-step instructions will probably always contain. So, at least, goes the imperative programming paradigm. The machine does what you tell it to; and the universe collapses onto itself before the last real number is computed. Functions: reliable, predictable adults Time to give credit where it's due: I'm really just riffing on the metaphor Venkat Subramaniam offered in his highly enjoyable keynote on The Joy of Functional Programming yesterday morning. His not-so-smart agents -- the 'programmed' of imperative programming -- were toddlers. Since I don't have any kids, I can't presume to understand this experience fully (although I did grow up with three younger brothers...). But the general idea is: imperative programming is tricky because, when you spell everything out super literally, it's very hard to tell exactly why what you thought should happen didn't. Venkat's talk was a whirlwind of functional concepts, from the thrill of immutability to the self-evident utility of memoization. For random (Myers-Briggs?) 
reasons, the object-oriented paradigm never seemed very intuitive to me -- I've gravitated towards functional style even when the problem domain wasn't actually modeled very well by functions -- but Venkat's side-by-side implementations of simple calculations in OO and functional Java showed the readability delta very clearly. Functional code is beautiful because it looks like its purpose. It tells you flat-out: here is what I do; and then it does it. But pure functions are also beautiful because they do exactly the same thing every time. I couldn't count on my two-year-old brother very much at all, because given a certain input I had pretty much no idea what would come out. But we all count on our grown-up collaborators to output exactly what they should, given a definite input, predictably and reliably every time. Of course, people also do more than expected -- every intervention of intelligence is an injection of creativity, not generated by the definition of the function -- but at least they do what you need them to do and no less. Containers: grown-ups with good boundaries I'm picking out just one aspect of the resurgent 'joy' of functional programming because the renaissance of containerization (another 'old' technology that is just now really taking off) is, I think, part of the same shift toward, let's say, treating computers as adults. If functions are reliable agents, then applications in well-defined containers are self-sufficient agents who know exactly what they need from others and neither require nor demand anything more. 
If apps on dedicated VMs are teenagers negotiating personal boundaries by waking/booting up independently (and taking far too long -- and far too many resources -- to do so, given their meager output) -- or bubble boys, isolated in ways that are unfortunate in order to isolate in ways that are absolutely necessary -- then containerized applications are subway-riders who jam into the train without offending anyone or campers who can live anywhere with just a backpack of just the stuff they need. Of course, subway-riders and campers do more than just not-mess-up. But what's kind of neat about containers is that -- like an adult with good boundaries -- clearly defined bounds and interfaces free up the application / mind to do whatever world-changing thing the developer / human has cooked up. I'll come back to this metaphor in a later article. (Mesh networks, SDN, and ad-hoc computing are all part of the same picture, I think. Kubernetes probably is too, along with event-driven and reactive programming, the actor model, dreams of Smalltalk, and of course REST, at least of the HATEOAS flavor.) But maybe this isn't a good way to think about some of these recent sparks in devworld within a single paradigm -- and maybe my perpetual discomfort with OO is influencing me too much. What do you think?
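The functional virtues Venkat highlighted — purity, immutability, memoization — fit in a few lines of JavaScript. A minimal sketch (mine, not from the talk): memoization is only trustworthy because a pure function, like a reliable adult, returns the same output for the same input every time.

```javascript
// Memoization is safe only for pure functions: same input, same
// output, no side effects. This wrapper caches results by their
// (single) argument.
function memoize(fn) {
  const cache = new Map();
  return function (arg) {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg);
  };
}

let calls = 0;
const slowSquare = (n) => { calls += 1; return n * n; };
const fastSquare = memoize(slowSquare);

console.log(fastSquare(12)); // 144 (computed)
console.log(fastSquare(12)); // 144 (cached -- slowSquare not called again)
console.log(calls);          // 1
```

Try the same trick on an impure function — one that reads a clock or mutates shared state — and the cache quietly serves stale answers: the toddler metaphor, in code.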
June 23, 2015
· 1,469 Views · 1 Like
The Programming Challenges of IoT
Pragmatic developers can look at the Internet of Things in two ways: This is amazing. I can only begin to imagine how I can directly improve the world outside the set of networked computer boxes. This is terrifying. If something goes wrong, then it’s on me—and this time the system affected extends outside the set of networked computer boxes. IoT is amazing in the way it bridges physical and virtual environments, but even the phrase “Internet of Things” should give a developer pause. Computers are pretty smart. Things are stupid. IoT tries to put Things online and tries to make them into inter-networked computers. That’s pop-philosophy, but you want to develop in the real world. So what real-world challenges will you face when you shoot for the IoT moon? Two Types of Challenges It seems there are two types of programming challenges for the Internet of Things: Data and control (the comp-sci and networking stuff) Information and business logic (the info-sci and human-computer interaction stuff) For this article, we’re going to talk about the programming problems we can solve around IoT. We’ll start at the bottom (data and control) and work our way up to the big picture (information and business logic). Type 1: Data and Control Challenge 1.1: Power This one is pretty obvious. Many IoT devices are wireless, and no one has invented thumbnail fusion reactors yet. One solution is equally obvious: pick your algorithms carefully. If you can save cycles to perform a given task, then do it. Libraries for implementing power-optimized algorithms will presumably spring up in greater numbers, but even so, you may need to inject some heavy-duty comp-sci know-how into IoT app development. The second solution is more complex than the first. Higher-level developers will have to think more about Dynamic Power Management (DPM), which just means: shutting down devices when they don’t need to be on and starting them up when they do. 
Normally the operating system worries about this, but an IoT application that orchestrates wearables and phones, for example, will know things that each device's OS won't — and will therefore be able to switch things on and off more intelligently than each device's individual OS. Another option is to write or customize an embedded OS. Challenge 1.2: Latency Latency on IoT sits in two places: at the source and in the pipes. The basic problem is a physical one. Thing-chips often have to be small, which means that a chip can only be as powerful as current transistor technology allows. Another problem is power. Many small devices transmit and receive data in discrete active/sleep cycles (think TDMA) in order to save bandwidth and power, but this trades latency for power: the more time asleep, the more power saved — and the longer the wait. Another tradeoff is that network topologies optimized for IoT can involve more hops over slower devices. Mesh networks, for example, are immune to the failure of a few nodes. Similarly, "fog" and "edge" computing paradigms relieve Internet infrastructure by doing as much as possible without hub-nodes. The downside is that each node (a) can't do very much on its own and (b) can only talk to neighboring nodes. The problem in the pipes is a matter of network infrastructure. Simply: the more Things, the less available bandwidth. Infrastructure technology will get faster, but cell networks won't catch up overnight. And Things, unlike fancier computers, are often supposed to transmit blindly — that is, without anyone necessarily asking them to. This means there's a massive potential for wasted bandwidth. Challenge 1.3: Unreliability The third challenge flows from the first two. Devices are unreliable — "Things" even more so. The distributed and decentralized virtues of IoT bring their own reliability problems. Here are just a few: Ubiquitous devices are cheap, so they fail more often. Truly ad-hoc connectivity implies ephemeral SLAs, so uptime and recovery time may be unclear. 
Loosely controlled devices may have better things to do than give you their data (or computing resources), so concurrency may grow very complex. Less-reliable hardware generates less-reliable information ('does my outlying datapoint just signify device failure?'), so you may need to chew your data more thoroughly at the application level. In a sense, IoT decouples low-level (sub-session-layer) channel capacity from high-level channel capacity, because the distribution of error sources on IoT is more heavily weighted toward originating or remote nodes. This means more error-correcting for application developers. Type 2: Information and Business Logic Challenge 2.1: Vast & Thin Data Sensors on smartphones are already generating oceans of raw data, and these sensors are pretty sophisticated. Every major mobile OS provides a unified, simple API to access clean sensor and geo data. But start grabbing this data and it's not immediately clear what to do with it. Try to think of killer applications for barometric data — besides weather and elevation (with GPS) — off the top of your head. Raw sensor data is extremely thin. It doesn't explain itself, and we haven't yet produced a complete mapping from physical measurements to business logic — let alone software design. Even if you know what to do with sensor/geo data eventually, you may have to learn new algorithms and data structures to process it immediately. Geo-graphs aren't CS101 graph data structures (for one thing, edge length is a first-class citizen of geo-graphs). The size of data over IoT is itself a problem. Wireless sensors beget tons of data. All the problems (and opportunities) of Big Data cascade naturally from IoT. Massively distributed computing on IoT devices is an exciting thought, but the toolchain for splitting calculations over a thousand idle Fitbits just isn't here yet. 
Challenge 2.2: Context-Sensitivity Consider the term "ubiquitous computing," defined as: what happens when wirelessly connected sensors and actuators, placed more or less everywhere, allow software to interact with much larger swaths of the physical world than just hardware or bare metal. Put ubiquitous computing on the Internet, and IoT makes the software context much larger. This has implications at two basic levels. At a high computer-architectural level: IoT extends the concept of the computing environment well outside the von Neumann machine and weakens the concept of peripheral I/O. In an IoT-world interface, sensors are input and actuators are output. As IoT devices process increasingly at the edge (within individual nodes), the devices that appear peripheral to other nodes are actually doing an awful lot of computation. At a high business-logic level: the more stuff outside the computer-box affects the program, the less predictable the program's behavior becomes at runtime. The same bizarrely-birthed memory leak might slow down the UI in a smartphone context but contribute to a cascading electrical-grid failure in an IoT context. This means that IoT demands more self-monitoring and self-repairing code. Two Types of Solutions Plenty of researchers are working on ambitious solutions to the programming challenges presented by IoT. Two of the more exciting examples include: Abstract Task Graph — a data-driven model that maps the network graph to an application graph [1] Computational REST — replaces content resources with computation resources [2] There are also a few more strategies you can use right now to solve some of the IoT programming challenges mentioned above. Reactive Programming: This general-purpose paradigm responds to all major application-level challenges and embraces the opportunities presented by IoT. The four definitive attributes of a reactive application are: event-driven, scalable, resilient, and responsive [3]. 
These four are excellent guiding principles for IoT applications at a high, cross-stack level. Flow-based Programming and the Actor Model: Both present strongly independent components where only messages can affect processes. Both are deeply amenable to concurrency (for example, shared state is discouraged), nondeterminism, and scaling without exponential complexity growth, because components are black boxes. FBP is a bit more pragmatic and restrictive, while the actor model is less restrictive and a bit harder to implement. FBP has already been implemented in JavaScript (NoFlo), and the actor model has been implemented in Java (Akka) [4][5][6][7]. What's important to remember is that there are already tools and techniques that can help you build IoT applications. FBP, actors, and reactive programming all have key attributes for creating applications that leverage the strengths of IoT to overcome its challenges. [1] https://www.usenix.org/legacy/event/mobisys05/eesr05/tech/full_papers/bakshi/bakshi.pdf [2] http://isr.uci.edu/tech_reports/UCI-ISR-10-3.pdf [3] http://www.reactivemanifesto.org/ [4] http://jpaulmorrison.com/fbp/ [5] http://arxiv.org/ftp/arxiv/papers/1008/1008.1459.pdf [6] http://noflojs.org/ [7] http://akka.io/ 2014 Guide to Internet of Things The 2014 Guide to Internet of Things covers 39 different IoT SDKs, developer programs, and hardware options, plus: Key findings from our survey of over 2,000 developers "How to IoT Your Life: The Complete Shopping List" "The Scale of IoT" Infographic Glossary of common IoT terms Four in-depth articles from industry experts
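The actor idea above is small enough to sketch directly. This is a hedged, toy illustration of the model — private state, a mailbox, messages processed one at a time — not code from Akka or NoFlo:

```javascript
// A minimal actor: state is private, and only messages (drained one
// at a time from a mailbox) can affect it -- no shared state.
// Illustrative only; real actor runtimes add scheduling, supervision,
// and distribution.
class Actor {
  constructor(handler) {
    this.handler = handler; // (state, message) => newState
    this.state = undefined;
    this.mailbox = [];
    this.processing = false;
  }
  send(message) {
    this.mailbox.push(message);
    if (!this.processing) this.drain();
  }
  drain() {
    this.processing = true;
    while (this.mailbox.length > 0) {
      this.state = this.handler(this.state, this.mailbox.shift());
    }
    this.processing = false;
  }
}

// A sensor-reading aggregator actor: keeps a running maximum.
const maxTracker = new Actor((state = -Infinity, reading) =>
  Math.max(state, reading));
[3, 9, 4].forEach((reading) => maxTracker.send(reading));
console.log(maxTracker.state); // 9
```

Because the only way in is `send()`, a thousand flaky sensor nodes can feed this aggregator concurrently without corrupting its state — exactly the black-box property the paragraph describes.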
August 14, 2014
· 15,311 Views · 0 Likes
Interactive 3D Dodecahedron in CSS3
Thursday's CSS3 bitmaps were clever and fun, but a little counter-HTML5-cultural: the whole point of SVG, Canvas, and so forth, is that vectors are better, because simpler, than bitmaps. Today's interactive geometric CSS3 shape is just the opposite: far more pixels than pre-rendering could possibly justify, emphatically composed of 2D surfaces, and fully animated in 3D. It's a folding/unfolding dodecahedron (not in FF/IE). On to the code: Because it's a simple shape, the div-organization is too plain to be interesting. The CSS is more satisfying -- with each side-pentagon transformed around x, y, and z axes, as dodecahedronity requires:

#dodecahedron.closed #group5 {
  -webkit-transform: rotateZ(-324deg) rotateX(63deg);
  -moz-transform: rotateZ(-324deg) rotateX(63deg);
  transform: rotateZ(-324deg) rotateX(63deg);
}

and each pentagon defined with gratuitously pleasing transparency:

.p2 {
  position: absolute;
  left: 0px;
  top: 0px;
  width: 0;
  height: 0;
  border-bottom: 59px solid #ff0000;
  border-left: 81px solid transparent;
  border-right: 81px solid transparent;
}

The JavaScript is what you might guess after a few seconds' interaction -- but is written efficiently and clearly enough to merit a look. Worth checking out, as an excellent, direct instantiation of several cool CSS3 elements.
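Where does the stylesheet's rotateX(63deg) come from? It's the fold angle of a regular dodecahedron — a quick check (my arithmetic, not the demo's code):

```javascript
// The dihedral angle of a regular dodecahedron is arccos(-1/sqrt(5)),
// about 116.57 degrees. Each face therefore folds up from flat by
// 180 - 116.57 = 63.43 degrees -- which the stylesheet rounds to 63deg.
const dihedral = Math.acos(-1 / Math.sqrt(5)) * 180 / Math.PI;
console.log(dihedral.toFixed(2));         // "116.57"
console.log((180 - dihedral).toFixed(2)); // "63.43"
```

So the geometry, not taste, dictates that transform — only the -324deg z-rotations (multiples of 360/5 = 72 degrees) distribute the pentagons around the shape.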
January 6, 2012
· 10,807 Views · 0 Likes
HowTo: Build a VNC Client for the Browser
VNC is just a special case of client-server, though perhaps an especially cool one. Quite a few rising web technologies do robust client-server work extra well (Node.js, WebSockets, etc.) -- and in-browser VNC is nothing new. Here are two (open-source, of course): noVNC is the more ambitiously HTML5-duplexed, using WebSockets as well as Canvas. It's quite popular and has its own 10-page GitHub wiki. It also supports wss:// encryption. Use this if you want a reliable, battle-tested HTML5 client. (WebSocket fallback is provided by web-socket-js.) vnc.js was written in 24 hours, during LinkedIn's first public Intern Hackday. So of course it hasn't been tested thoroughly, and it probably could be written a little more cleanly. But there's something beautifully coherent about an app written in a single session: if the app really does work, then some of the decisions make a little more sense -- it's possible to get into the developer's mind a little more easily -- and breaking down the code doesn't produce as many 'why did they do this??' moments, because the developers' minds were never far from any part of the project at any moment during development. vnc.js doesn't use WebSockets (it uses Socket.io instead), but that's fine -- a little less HTML5 and a little more slick JavaScript doesn't hurt anyone. Plus, the marathoning hackers behind vnc.js put together a sweet little tutorial detailing the decisions made during that 24-hour period, emphasizing the rapid thought process behind the architecture (in clear diagrams), and a very practical abstraction for easier in-browser work with TCP (using Node.js and Socket.io) and RFB. Both packages are worth checking out; the hacking tutorial is a fun read for any web developer interested in coding a VNC client, or even just in working with different network protocols in the browser.
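Underneath either client sits the RFB protocol that VNC speaks. As a tiny, hedged taste of what such a client must do (this parser is illustrative, not taken from noVNC or vnc.js), the session opens with a 12-byte version handshake:

```javascript
// An RFB (VNC) session opens with a 12-byte ASCII version string,
// e.g. "RFB 003.008\n" for protocol 3.8. A browser VNC client must
// parse this before negotiating security and framebuffer settings.
// (Illustrative only -- not code from noVNC or vnc.js.)
function parseRfbVersion(handshake) {
  const match = /^RFB (\d{3})\.(\d{3})\n$/.exec(handshake);
  if (!match) throw new Error('not an RFB version handshake');
  return { major: parseInt(match[1], 10), minor: parseInt(match[2], 10) };
}

console.log(parseRfbVersion('RFB 003.008\n')); // { major: 3, minor: 8 }
```

Everything after this handshake is binary framebuffer data — which is exactly why both projects need a byte-level TCP bridge (WebSockets or Socket.io plus Node.js) rather than plain XHR.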
December 30, 2011
· 18,696 Views · 0 Likes
HTML5 Canvas + WebSockets = Multiplayer Space Shooter In Browser
Recently I ran across Rawkets, a slick site taking two emerging web technologies -- HTML5 Canvas and WebSockets -- and combining them in the most obvious way possible: a multiplayer space shooter. Why Canvas? No plugins -- graphical Yes; and why WebSockets? Low latency -- multiplayer Yes. Sadly, every time I join the game, nobody else is there. If I wanted single-player HTML5 gaming, I could check out another project by Rawkets' creator, Rob Hawkes: straight-up Asteroids, using the HTML5 game engine Impact. But WebSockets won't help Asteroids, because Asteroids runs entirely on one client. Rawkets, on the other hand, has multiple clients running Canvas, each with its own JavaScript, connecting via WebSockets, all talking through Node.js on the server, producing something like this: I can't tell whether the game is any fun, because I've never seen anyone else in there. (Also, it doesn't seem to work in Chrome.) But as a tech demo it's a cool idea, and conceptually straightforward enough to inspire. (If you're impressed, Rob also links from the game site to his HTML5 Canvas book -- though apparently the book assumes virtually no knowledge of Canvas or JavaScript, and doesn't progress all that far.) Check it out, and maybe shoot someone else's ship down -- fairly, of course, because WebSockets will keep multiplex channels persistently open...
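The server half of a game like this is conceptually tiny: keep an authoritative map of player states, fold in each update that arrives over a socket, and broadcast the result every tick. A hedged sketch of just the state-folding step -- `applyUpdate` and its field names are mine, not Rawkets' actual code:

```javascript
// Authoritative server-side state: fold one client's update into the
// shared map of players; the server would then broadcast `state` to
// every connected socket each tick. Field names are illustrative.
function applyUpdate(state, update) {
  const prev = state[update.id] || { x: 0, y: 0, angle: 0 };
  state[update.id] = { ...prev, ...update.data };
  return state;
}
```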
December 26, 2011
· 9,348 Views · 0 Likes
Eventual Consistency in NoSQL Databases: Theory and Practice
One of NoSQL's goals: handle previously-unthinkable amounts of data. One of unthinkable-amounts-of-data's problems: previously-improbable events become extremely probable, precisely because the set of interactions is so large. Flip a coin a hundred times, and you're not likely to get 50 heads in a row. But flip it a few trillion times, and you probably will find some 50-heads streaks. So NoSQL's performance strength is also its mathematical weakness. This order of scale can cause lots of problems, but one of the most common concerns consistency -- the C in ACID -- clearly a fundamental desideratum for any database system, but in principle much harder to achieve for NoSQL databases than for others. Emerging database technologies have forced developers and computer scientists to define more exactly what kind of consistency is really needed for any given application. Two years ago, ACM (the Association for Computing Machinery) published an extremely helpful examination of the attenuated notion of consistency called 'eventual consistency'. Their summary: Data inconsistency in large-scale reliable distributed systems must be tolerated for two reasons: improving read and write performance under highly concurrent conditions; and handling partition cases where a majority model would render part of the system unavailable even though the nodes are up and running. The article surveys technical solutions as well as user considerations that might soften the undesirability of anything less than perfect, instantaneous consistency. It's not long (4 pages plus pictures), and explains some deep database issues quite clearly. On the more practical side of the problem: Russell Brown recently gave a talk at the NoSQL Exchange 2011 on exactly this topic. More specifically, he showed how some distributed systems (Riak in particular) try to minimize conflicts, and suggested some ways to reconcile conflicts automatically using smart semantic techniques.
Check out the NoSQL Exchange page for Russell's talk here, which includes an embedded video. But read the ACM article first for a broader overview, since Russell launches into technical details pretty quickly.
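One concrete mechanism behind Russell's talk: Riak tracks causality with vector clocks, and a conflict is exactly the case where neither of two clocks dominates the other. A minimal illustrative sketch of that comparison (my own code, not Riak's implementation):

```javascript
// Compare two vector clocks (maps from node id to update counter).
// Returns 'descends', 'precedes', 'equal', or 'concurrent' --
// 'concurrent' is the case that forces conflict resolution.
function compareClocks(a, b) {
  const nodes = new Set([...Object.keys(a), ...Object.keys(b)]);
  let aAhead = false, bAhead = false;
  for (const n of nodes) {
    const av = a[n] || 0, bv = b[n] || 0;
    if (av > bv) aAhead = true;
    if (bv > av) bAhead = true;
  }
  if (aAhead && bAhead) return 'concurrent';
  if (aAhead) return 'descends';
  if (bAhead) return 'precedes';
  return 'equal';
}
```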
November 22, 2011
· 11,342 Views · 0 Likes
Updating the Duct Tape for HTML5: Websockets in Perl (Mojolicious)
Perl was easy to use, wildly popular, and lots of fun. The Camel Book introduced many coders to a powerful new language (and to the whimsically covered O'Reilly series), and offered access to web programming via CGI. Plenty of people still develop in Perl ('the duct tape of the Internet'), although lately some criticism of Perl programmers has surfaced. No doubt about one thing, though: CGI is just too old. Sensing a need, Sebastian Riedel created Mojolicious to fill CGI's place, satisfying Perl programmers' desire for a more modern web framework. Yesterday Sebastian showed off some of Mojolicious' simplicity and power: By now you've probably heard about WebSockets, and that they are the future of web development, but so far there are very few examples that really show how easy to use they actually are. So today we are going to explore the wonderful world of events in Mojolicious a bit and build a little application that forwards all framework log messages to a browser window. The script is short and sweet and, if you still love Perl, will warm your HTML5 heart. Check it out here.
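The pattern Sebastian's script rides on -- subscribe to the framework's log events, forward each message to connected browsers -- is easy to mimic in any event-driven setting. A tiny sketch of the subscription half, in JavaScript rather than Perl, with names of my own invention:

```javascript
// Minimal publish/subscribe log forwarder: each subscriber (think: one
// WebSocket connection per open browser window) registers a callback
// and receives every log line as it is emitted.
function makeLogForwarder() {
  const subscribers = [];
  return {
    subscribe(fn) { subscribers.push(fn); },
    log(level, message) {
      for (const fn of subscribers) fn(`[${level}] ${message}`);
    },
  };
}
```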
November 1, 2011
· 7,127 Views · 0 Likes

Refcards

Refcard #358

Salesforce Application Design


Trend Reports

Trend Report

Microservices and Containerization

According to our 2022 Microservices survey, 93% of our developer respondents work for an organization that runs microservices. This number is up from 74% when we asked this question in our 2021 Containers survey. With most organizations running microservices and leveraging containers, we no longer have to discuss the need to adopt these practices, but rather how to scale them to benefit organizations and development teams. So where do adoption and scaling practices of microservices and containers go from here? In DZone's 2022 Trend Report, Microservices and Containerization, our research and expert contributors dive into various cloud architecture practices, microservices orchestration techniques, security, and advice on design principles. The goal of this Trend Report is to explore the current state of microservices and containerized environments to help developers face the challenges of complex architectural patterns.


Trend Report

Low Code and No Code

As the adoption of no-code and low-code development solutions continues to grow, many questions arise about their benefits, flexibility, and overall organizational role. Through the myriad of questions, there is one main theme in the benefit of their use: leveraging no-code and low-code practices for automation and speed to release. But what are the pain points that these solutions seek to address? What are the expected vs. realized benefits of adopting a no- or low-code solution? What are the current gaps that these solutions leave in development practices? This Trend Report provides expert perspectives to answer these questions. We present a historical perspective on no and low code, offer advice on how to migrate legacy applications to low code, dive into the challenges of securing no- and low-code environments, share insights into no- and low-code testing, discuss how low code is playing a major role in the democratization of software development, and more.


Trend Report

Data Pipelines

Data is at the center of everything we do. As each day passes, more and more of it is collected. With that, there’s a need to improve how we accept, store, and interpret it. What role do data pipelines play in the software profession? How are data pipelines designed? What are some common data pipeline challenges? These are just a few of the questions we address in our research. In DZone’s 2022 Trend Report, "Data Pipelines: Ingestion, Warehousing, and Processing," we review the key components of a data pipeline, explore the differences between ETL, ELT, and reverse ETL, propose solutions to common data pipeline design challenges, dive into engineered decision intelligence, and provide an assessment on the best way to modernize testing with data synthesis. The goal of this Trend Report is to provide insights into and recommendations for the best ways to accept, store, and interpret data.


Trend Report

Enterprise Application Integration

As with most 2022 trends in the development world, discussions around integration focus on the same topic: speed. What are the common integration patterns and anti-patterns, and how do they help or hurt overall operational efficiency? The theme of speed is what we aim to cover in DZone’s 2022 "Enterprise Application Integration" Trend Report. Through our expert articles, we offer varying perspectives on cloud-based integrations vs. on-premise models, how organizational culture impacts successful API adoption, the different use cases for GraphQL vs. REST, and why the 2020s should now be considered the "Events decade." The goal of this Trend Report is to provide you with diverse perspectives on integration and allow you to decide which practices are best for your organization.


Trend Report

DevOps

With the need for companies to deliver capabilities faster, it has become increasingly clear that DevOps is a practice that many enterprises must adopt (if they haven’t already). A strong CI/CD pipeline leads to a smoother release process, and a smoother release process decreases time to market. In DZone’s DevOps: CI/CD and Application Release Orchestration Trend Report, we provide insight into how CI/CD has revolutionized automated testing, offer advice on why an SRE is important to CI/CD, explore the differences between managed and self-hosted CI/CD, and much more. The goal of this Trend Report is to offer guidance to our global audience of DevOps Engineers, Automation Architects, and all those in between on how to best adopt DevOps practices to help scale the productivity of their teams.


Trend Report

Enterprise AI

In recent years, artificial intelligence has become less of a buzzword and more of an adopted process across the enterprise. With that, there is a growing need to increase operational efficiency as customer demands arise. AI platforms have become increasingly more sophisticated, and there has become the need to establish guidelines and ownership. In DZone's 2022 Enterprise AI Trend Report, we explore MLOps, explainability, and how to select the best AI platform for your business. We also share a tutorial on how to create a machine learning service using Spring Boot, and how to deploy AI with an event-driven platform. The goal of this Trend Report is to better inform the developer audience on practical tools and design paradigms, new technologies, and the overall operational impact of AI within the business. This is a technology space that's constantly shifting and evolving. As part of our December 2022 re-launch, we've added new articles pertaining to knowledge graphs, a solutions directory for popular AI tools, and more.


Trend Report

Application Performance Management

As enterprise applications increasingly adopt distributed systems and cloud-based architectures, the complexity of application performance management (APM) has grown accordingly. To address this new set of challenges, traditional APM is making a push towards intelligent automation (AIOps), self-healing applications, and a convergence of ITOps and DevOps. DZone’s 2021 Application Performance Management Trend Report dives deeper into the management of application performance in distributed systems, including observability, intelligent monitoring, and rapid, automated remediation. It also provides an overview of how to choose an APM tool provider, common practices for self-healing, and how to manage pain points that distributed cloud-based architectures cause. Through research and thoughtfully curated articles, this Trend Report offers a current assessment of where real enterprises are in their journey to design APM approaches for modern architectures.


Trend Report

Kubernetes and the Enterprise

In DZone’s 2020 Kubernetes and the Enterprise Trend Report, we found that over 90% of respondents to our survey reported leveraging containerized applications in a production environment, nearly doubling since we asked the same question in 2018. As containerization approaches peak saturation, Kubernetes has also become an indispensable tool for enterprises managing large and complex, container-based architectures, with 77% of respondents reporting Kubernetes usage in their organizations. Building upon findings from previous years that indicate the technical maturity of containers and container orchestration, DZone’s 2021 Kubernetes and the Enterprise Trend Report will explore more closely the growing ecosystem and tooling, use cases, and advanced strategies for Kubernetes adoption in the enterprise.


Trend Report

Application Security

In the era of high-profile data breaches, rampant ransomware, and a constantly shifting government regulatory environment, development teams are increasingly taking on the responsibility of integrating security design and practices into all stages of the software development lifecycle (SDLC). In DZone’s 2021 Application Security Trend Report, readers will discover how the shift in security focus across the SDLC is impacting development teams — from addressing the most common threat agents and attack vectors to exploring the best practices and tools being employed to develop secure applications.


Trend Report

Low-Code Development

Development speed, engineering capacity, and technical skills are among the most prevalent bottlenecks for teams tasked with modernizing legacy codebases and innovating new solutions. In response, an explosion of “low-code” solutions has promised to mitigate such challenges by abstracting software development to a high-level visual or scripting language used to build integrations, automate processes, construct UI, and more. While many tools aim to democratize development by reducing the required skills, others seek to enhance developer productivity by eliminating needs such as custom code for boilerplate app components. Over the last decade, the concept of low code has matured into a category of viable solutions that are expected to be incorporated within mainstream application development. In this Trend Report, DZone examines advances in the low-code space, including developers' perceptions of low-code solutions, various use cases and adoption trends, and strategies for successful integration of these tools into existing development processes.


Trend Report

CI/CD

In 2020, DevOps became more crucial than ever as companies moved to distributed work and accelerated their push toward cloud-native and hybrid infrastructures. In this Trend Report, we will examine what this acceleration looked like for development teams across the globe, and dive deeper into the latest DevOps practices that are advancing continuous integration, continuous delivery, and release automation.


Trend Report

Containers

With a mainstream shift toward cloud-native development, more organizations than ever are realizing real benefits as they modernize their architectures with containerized environments. While this move promises to accelerate application development, it also introduces a new set of challenges that occur with a fundamentally altered software delivery pipeline, ranging from security to complexity and scaling. In DZone's 2021 Containers Trend Report, we explore the current state of container adoption, uncover common pain points of adopting containers in a legacy environment, and explore modern solutions for building scalable, secure, stable, and performant containerized applications.


Trend Report

Modern Web Development

The web is evolving fast, and developers are quick to adopt new tools and technologies. DZone’s recent 2021 Modern Web Development survey served to help better understand how developers build successful web applications, with a focus on how decisions are made about where computation and storage should occur. This Trend Report will help readers examine the pros and cons of critical web development design choices, explore the latest development tools and technologies, and learn what it takes to build a modern, performant, and scalable web application. Readers will also find contributor insights written by DZone community members, who cover topics ranging from web performance optimization and testing to a comparison of JavaScript frameworks. Read on to learn more!


Trend Report

Kubernetes and the Enterprise

Want to know how the average Kubernetes user thinks? Wondering how modern infrastructure and application architectures interact? Interested in container orchestration trends? Look no further than DZone’s latest Trend Report, “Kubernetes and the Enterprise.” This report will explore key developments in myriad technical areas related to the omnipresent container management platform, plus expert contributor articles highlighting key research findings like scaling a microservices architecture, cluster management, deployment strategies, and much more!


Comments

The Greatest Software Development Books of All Time

Nov 19, 2020 · Milan Milanovic

Great list. Also consider The C Programming Language and probably Elements of Programming, although I haven't read all of the latter (even though it's short).

Eager Optimization Is The Enemy

May 21, 2016 · Sam Atkinson

Love it. But a thought on persuasion:

Amortize cost of speed decrease over projected application lifetime, discounted by uncertainty of application lifetime length and future runtime environment. Compare with cost of coupling tightness increase introduced by eager optimization, discounted by uncertainty of code inflexibility cost over time. Add extra weight to represent fraternal concern for future programmer trying to understand your code, on top of cash wasted on that programmer's struggling-through-your-hyper-optimized-code hours. SO many uncertainties about the future -- but at least right now I know I can make my code cleaner. People normally discount future utility with apparently ridiculous weighting on uncertainty, but often generational breaks ("what will this do to my grandchildren's world") override. Maybe the social argument -- the responsibility to future coding generations -- will sometimes be more persuasive than the appeal of code cleanliness in itself? since it is fairly certain that other people will have trouble reading your eagerly-optimized code, while the relative cost of your brittle code versus your eagerly-performance-optimized code over time is much less certain.

Anyway I felt much worse when contacted by a poor puzzled programmer two years after I abandoned a codebase than when I wasted my own time trying to figure out what my undocumented epicycles were trying to do in the same blocks (of course it was a horrid optimization specific to the local network config).

The Real Reason You Shouldn't Use SIGKILL [Comic]

Apr 04, 2016 · Daniel Stori

:'(

Feeling guilty for every kill -9 from here until forever...

Always Start With Eager Initialization

Mar 30, 2016 · Sam Atkinson

Sure, that's what Sam pointed out in the article -- probably lazy won't help, but of course if it does (e.g. frequently opened db connection) then adjust accordingly.

As I understand James' image in relation to the article: the article proposes a rule of thumb that instantiates the general point Knuth is making -- and ∀-rules and thumb-rules both introduce constraints that reduce decision space and thereby stress, etc. 'Less thought required' isn't as good as 'without any extra thought required', but it's getting there...

So to flesh out the thumb: what are some other 'exceptional cases in which lazy initialization really does make sense'?

Close Your Database Connections! And Use Connection Pools While You're at It

Mar 30, 2016 · Duncan Brown

Ha, good idea, thanks! Mild embarrassment in one's own eyes can indeed be a great motivator to better code...

Close Your Database Connections! And Use Connection Pools While You're at It

Mar 29, 2016 · Duncan Brown

Ha. Would say "no duh" except that I can't count the times I've smacked myself in the head for leaving database connections open.

There should be a "Checklist for Bad Things You Knew About Ten Years Ago But Still Do Anyway".

Why You Should Use Git Over TFVC in TFS

Mar 25, 2016 · Matthew Casperson

That's a good point re. mixing tools -- I could say that "in fact people do sometimes choose between JIRA and GitHub for issue tracking because GitHub Issues isn't too shabby so if you're using GitHub maybe you should just stick with the built-in feature and not worry about another tool?" but that lumps things together in a general conversation (the article) that only happen to be lumped together in a particular situation (my choice point). (On the other hand, my Unix side wonders whether the size of many tools' feature-sets and the resulting feature overlaps among tools -- the things that make these heterogeneous option sets available in the first place -- aren't just mistakes inherited from factory-centric organizational design principles.)

Thanks for the feedback. Title changed.

Why You Should Use Git Over TFVC in TFS

Mar 24, 2016 · Matthew Casperson

That's true (also @Csaba), but you might have to choose between option (a) which includes TFS and not Git (even though TFS does support Git -- say for solution-integration reasons, like "we're going to use TFS only with TFVC") and option (b) which includes Git and whatever other ALM stuff. So I think the title seems a little apples-to-oranges but sometimes in actual moments of choice -- because in-prod technology selections are not atomic -- you do have to choose between apples, on the one hand, and oranges, on the other.

10 Classic Books Every Serious Programmer Should Read

Feb 16, 2016 · Deepak Karanth

For something a little different, what do you think of SICP or Bryant and O'Hallaron? SICP helped me bridge abstract compsci coursework with actually writing code, and Bryant and O'Hallaron (which I admit I haven't read fully) helped me decide how much to trust compiler guesses and runtime abstraction (for me this meant .NET).

I've only read about half of your list and now have loads of great-looking reading material -- thanks for this post. :)

The Evolution of Linux Containers and Their Future

Jan 29, 2016 · Imesh Gunaratne

Nice, thanks -- and wow, what a cool wiki page!

VMware's deep-dive into how they virtualized x86 is also a beautiful read.

Make the Magic go away.

Aug 13, 2015 · Jane Berry

How important is the abstract compsci stuff, though? -- the part that kind of is a little magical, that makes it possible for putting-things-in-little-boxes-and-shifting-them-around to generate valid inferences, traverse graphs, calculate values of functions with arbitrary limits, pixelize projective geometry, do a massive linear regression in milliseconds?

Make the Magic go away.

Aug 13, 2015 · Jane Berry

Ha! Great article. Sort of the converse of Fred Brooks' 'No Silver Bullet' piece. Sadly I haven't done enough assembly to feel the magic of the JVM go away. But I see what Uncle Bob is getting at, and far too often I do hit 'wait a minute, that was incredibly simple..dammit' moments..that aren't revelations but just annoyances at my earlier (and damaging) magic-feeling.

10 Essential & Useful Ruby on Rails 4 Gems

Aug 12, 2015 · Elaine Harris

Ultimately this kind of info would be more useful in a structured catalogue, but rubygems.org doesn't provide use-case recommendations. Are there any structured directories of gems that let you, for example, list all ORM gems?

How to persist LocalDate and LocalDateTime with JPA

Aug 12, 2015 · Thorben Janssen

Hey, thanks -- that BLOBbing really sucks and this looks pretty easy.

Java Interview Questions On main() Method

Aug 09, 2015 · Instanceof java

This seems extremely basic..maybe easier just to link to Oracle's Java Tutorials?

Java 7 Quietly Changed the Structure of String

Jul 10, 2015 · mitchp

Interesting! Have C++ and/or C# changed strings over time to such a significant degree?

CASE Function in SQL Server 2005

Jul 01, 2014 · fordevs devs

Impressive, thoughtful article -- thanks for the post.

As a matter of empirical fact, I guess, any code change de facto increases the chance of random bug popup. There's been some empirical work on the correlation of refactorings with bug reports, and overall it looks like refactorings and bug reports correlate positively. But maybe most of those generic refactorings were stupid or at least 'not best'. Figuring out which refactorings are 'best' is exactly what will help us get past the generic and into the actionable.

More granular empirical work on the cost/benefit of refactoring seems to have ballooned a bit over the past two years. I've only skimmed a few of these, but you might find some of these articles interesting.

Call the expert: Adding subdomain requirements to routing.yml

Feb 23, 2014 · Stefan Koopmanschap

@Jaime, good point re. boilerplate vs. patterns. I think we also sometimes apply the concept of a pattern too broadly even when we don't just insert some boilerplate implementation. For example, you might still produce too many objects using a pointless Factory even if you don't implement a Factory with boilerplate code.

@Matthias, I don't know the solution to the problem you observe. I've 'thought ahead too far' way too often, resulting in bloated and (surprisingly often) brittle code. In most cases this has probably been smart-aleckiness -- going beyond behavioral specs, thinking how the users might (but have no real plans to) use the software. Is it enough to just say 'stick to requirements' and that's it? I'm tempted to say no, because the developer often knows better than anyone else what the program is capable of. Then the choice is just between strategic management (requirements) vs. entrepreneurial (what this could possibly do) decision-making. And sometimes, especially when it comes to technology, entrepreneurial thinking really does work better.

But to the practical issue -- how do you decide whether a given piece of code is going to be reused often?

@Raging makes another good point -- I don't really have a taxonomy of pattern misuse in mind. But I'd like to build one. :) At least, I feel like it would sometimes help me avoid bloat, and maybe others too. Maybe we could even build a nicely articulated ontology, or a richer (more structured) pattern language...

@Serguei has also touched on another interesting idea. Certainly it's wrong to think of patterns as the 'correct' way. That's just student & CYA thinking -- just trying not to look like you've done something the wrong way. It has no place in any kind of craftsmanship, or business for that matter. But perhaps patterns' educational role suggests a third benefit, in addition to modularity and DRY. Because patterns structure code in commonly-accepted ways, use of patterns can help others understand your work. More articulated, 'thin' patterns might do this even more effectively -- say, especially if we develop naming conventions that clearly communicate their place in the hierarchy (as specializations of broader design patterns). Or maybe I'm on the wrong track..?

Call the expert: Adding subdomain requirements to routing.yml

Feb 23, 2014 · Stefan Koopmanschap

@Jaime, good point re. boilerplate vs. patterns. I think we also sometimes apply the concept of a pattern too broadly even when we don't just insert some boilerplate implementation. For example, you might still use produce too many objects using a pointless Factory even if you don't implement a Factory with boilerplate code.

@Matthias, I don't know the solution to the problem you observe. I've 'thought ahead too far' way too often, resulting in bloated and (surprisingly often) brittle code. In most cases this has probably been smart-aleckiness -- going beyond behavioral specs, thinking how the users might (but have no real plans to) use the software. Is it enough to just say 'stick to requirements' and that's it? I'm tempted to say no, because the developer often knows better than anyone else what the program is capable of. Then the choice is just between strategic management (requirements) vs. entrepreneurial (what this could possibly do) decision-making. And sometimes, especially when it comes to technology, entrepreneurial thinking really does work better.

But to the practical issue -- how do you decide whether a given piece of code is going to be reused often?

@Raging makes another good point -- I don't really have a taxonomy of pattern misuse in mind. But I'd like to build one. :) At least, I feel like it would sometimes help me avoid bloat, and maybe others too. Maybe we could even build a nicely articulated ontology, or a richer (more structured) pattern language...

@Serguei has also touched on another interesting idea. Certainly it's wrong to think of patterns as the 'correct' way. That's just student & CYA thinking -- just trying not to look like you've done something the wrong way. It has no place in any kind of craftsmanship, or business for that matter. But perhaps patterns' educational role suggests a third benefit, in addition to modularity and DRY. Because patterns structure code in commonly-accepted ways, use of patterns can help others understand your work. More articulated, 'thin' patterns might do this even more effectively -- say, especially if we develop naming conventions that clearly communicate their place in the hierarchy (as specializations of broader design patterns). Or maybe I'm on the wrong track..?

Call the expert: Adding subdomain requirements to routing.yml

Feb 23, 2014 · Stefan Koopmanschap

@Jaime, good point re. boilerplate vs. patterns. I think we also sometimes apply the concept of a pattern too broadly even when we don't just insert some boilerplate implementation. For example, you might still use produce too many objects using a pointless Factory even if you don't implement a Factory with boilerplate code.

@Matthias, I don't know the solution to the problem you observe. I've 'thought ahead too far' way too often, resulting in bloated and (surprisingly often) brittle code. In most cases this has probably been smart-aleckiness -- going beyond behavioral specs, thinking how the users might (but have no real plans to) use the software. Is it enough to just say 'stick to requirements' and that's it? I'm tempted to say no, because the developer often knows better than anyone else what the program is capable of. Then the choice is just between strategic management (requirements) vs. entrepreneurial (what this could possibly do) decision-making. And sometimes, especially when it comes to technology, entrepreneurial thinking really does work better.

But to the practical issue -- how do you decide whether a given piece of code is going to be reused often?

@Raging makes another good point -- I don't really have a taxonomy of pattern misuse in mind. But I'd like to build one. :) At least, I feel like it would sometimes help me avoid bloat, and maybe others too. Maybe we could even build a nicely articulated ontology, or a richer (more structured) pattern language...

@Serguei has also touched on another interesting idea. Certainly it's wrong to think of patterns as the 'correct' way. That's just student & CYA thinking -- just trying not to look like you've done something the wrong way. It has no place in any kind of craftsmanship, or business for that matter. But perhaps patterns' educational role suggests a third benefit, in addition to modularity and DRY. Because patterns structure code in commonly-accepted ways, use of patterns can help others understand your work. More articulated, 'thin' patterns might do this even more effectively -- say, especially if we develop naming conventions that clearly communicate their place in the hierarchy (as specializations of broader design patterns). Or maybe I'm on the wrong track..?

Call the expert: Adding subdomain requirements to routing.yml

Feb 23, 2014 · Stefan Koopmanschap

Google’s OpenSocial API & Facebook’s SocialAds

Feb 29, 2012 · briankel

Here's a brief intro to the basics of F#: http://www.developerfusion.com/article/122079/intro-to-f/
Major and Minor JavaScript Pitfalls and ECMAScript 6

Feb 24, 2012 · cjsmith

Well said, and maybe Google really is the only ally strong enough to break JavaScript's iron monopoly; and maybe Dart really is that good (I don't know -- I've never used it seriously). But some web devs have lately started to fear a WebKit monopoly too -- which might prove a better-orchestrated monopoly than JavaScript's in the short term, but probably won't encourage organic development of standards rapidly applied to emerging use-cases (if all monopolies lumber, as Google is beginning to do, apart from WebKit at least). Any thoughts on the (perceived?) danger that WebKit will become the next IE6?
tf-net - Topology Framework .NET

Feb 14, 2012 · Mr B Loid

Cool, thanks for all your feedback! We're working on WebSockets! :)
Gears Future APIs: Desktop Shortcut API

Feb 03, 2012 · Mr B Loid

Thanks! Yes, that would be awesome -- like a window onto another century. Brainstorming: to do it properly client-side, without rigid viewing sites, you'd probably need an actual 3D engine... which would look pretty lousy on current mobile tech. But you could also pre-render or pre-record video and stream it server-side using an appropriate web service, called with the geolocation as a parameter. And you could probably handle the bandwidth with serious wifi at the historical site...
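To make the brainstorm a bit more concrete, here's a minimal sketch of the server-side idea -- requesting a pre-rendered historical view from a hypothetical streaming service, with the device's geolocation passed as parameters. The endpoint, parameter names, and `year` knob are all invented for illustration:

```javascript
// Build the request URL for a hypothetical historical-view streaming
// service; lat/lon come from the device, the year picks the rendering.
function buildStreamUrl(latitude, longitude, year) {
  const params = new URLSearchParams({
    lat: latitude.toFixed(4),
    lon: longitude.toFixed(4),
    year: String(year),
  });
  return `https://example.com/historical-view?${params.toString()}`;
}

// In a browser you might feed it the Geolocation API's result:
// navigator.geolocation.getCurrentPosition(pos => {
//   videoEl.src = buildStreamUrl(pos.coords.latitude, pos.coords.longitude, 1850);
// });
```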
How They Did It: Command and Conquer in HTML5 Canvas

Jan 17, 2012 · John Esposito

Certainly; so the JavaScript is the interesting part. In a sense, though, because Canvas helps make the browser a platform, writing a game in JavaScript and outputting to a Canvas is closely connected to the overall purpose of many new features of HTML5, and emerging APIs not contained in the actual HTML spec.
What separates good code from great code?

Jan 11, 2012 · $$ANON_USER$$

This is great; thanks for posting.
Adding fractional time to a Calendar

Dec 29, 2011 · Paul Davis

This might help: http://stackoverflow.com/questions/6178332/force-decimal-point-instead-of-comma-in-html5-number-input-client-side
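For context, that thread is about browsers that localize `<input type="number">` with a comma as the decimal separator. One common client-side workaround (a sketch of my own, not code from the article) is to accept both separators and normalize before parsing:

```javascript
// Parse a user-typed decimal that may use a comma as the separator
// (e.g. "3,14" in many European locales). Returns null on bad input.
function parseDecimal(text) {
  const normalized = text.trim().replace(",", ".");
  const value = Number(normalized);
  return Number.isNaN(value) ? null : value;
}
```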
Famous Logos in Pure CSS3

Dec 27, 2011 · John Esposito

Aha -- just to be clear, I didn't create those logos -- just noticed them, thought they were awesome, and poked around the code a little. ecsspert.com created all of them. Sorry if that wasn't clear from my post!
Lambdas and Closures and Currying. Oh my! (Part 5)

Dec 27, 2011 · Mr B Loid

Sweet game! I'm a sucker for laser glows. Didn't see anyone else in there when I tried it -- but, does the endless galaxy make it harder for individual users to find one another (esp. without landmarks because you're in space)?

What you're saying about the problems with HTML5 game development sounds spot-on -- a lot like what EA and Zynga people were saying at the HTML5 game conference a couple of months ago.

Given HTML5's messiness, what made you want to learn it and build a game in it (besides awesome curiosity)? Is it just the interoperability promise?

HTML5 Canvas + WebSockets = Multiplayer Space Shooter In Browser

Dec 27, 2011 · John Esposito

Sweet game! I'm a sucker for laser glows. Didn't see anyone else in there when I tried it -- but, does the endless galaxy make it harder for individual users to find one another (esp. without landmarks because you're in space)?

What you're saying about the problems with HTML5 game development sounds spot-on -- a lot like what EA and Zynga people were saying at the HTML5 game conference a couple of months ago.

Given HTML5's messiness, what made you want to learn it and build a game in it (besides awesome curiosity)? Is it just the interoperability promise?

Architecting For Performance And Scalability - Panel Discussion @ QCon

Dec 27, 2011 · Srini Penchikala

Creating a table with a facebook_id field and an email_address field sounds like it would give you the lookup you need... but maybe I'm not understanding your question. Are you currently using email addresses to identify users once they go through the checkout process (since you said users don't actually log in, I'm guessing you ask for an email during checkout)?

The Wikipedia page on OpenID is pretty good, I think.
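To make the lookup idea concrete, here's a minimal in-memory sketch of the mapping I mean -- in a real app this would be a database table, and the names (`facebookId`, checkout email) are just illustrative:

```javascript
// Link Facebook IDs to the email addresses collected at checkout,
// so either identifier resolves to the same customer record.
const customersByEmail = new Map();

function recordCheckout(email, facebookId) {
  const existing = customersByEmail.get(email) || { email, facebookId: null };
  if (facebookId) existing.facebookId = facebookId;
  customersByEmail.set(email, existing);
  return existing;
}

function findByFacebookId(facebookId) {
  for (const customer of customersByEmail.values()) {
    if (customer.facebookId === facebookId) return customer;
  }
  return null;
}
```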

Test Driven Development: Does writing software backwards really improve quality?

Dec 22, 2011 · Mr B Loid

You can get the latest IE10 developer preview here, although you'll also need the Windows 8 Developer Preview for that.

MSFT's full IE10 guide for developers is here.

For specific feature support, try caniuse.com's IE9 vs. IE10 comparison here -- then play with whichever features and browsers you want to compare.

For a more discursive analysis: Sencha wrote a nice Win8/IE10 first look article here (emphasis on HTML5).

Generics Memory Usage

Dec 16, 2011 · Tony Thomas

Interesting. Why do you think large corps and gov depts account for most of the remaining IE6 installs?

I'm not doubting, just wondering how we would know (or conjecture).

Web Payments as a Web Standard

Dec 16, 2011 · John Esposito

Well, I have nothing to do with PaySwarm. But Manu Sporny, who wrote the vocabulary, does. I just haven't seen any other serious attempts at a micropayments vocabulary. If you know of one, though (or see something wrong with PaySwarm's), then I'd love to know about it!
Bitstream Vera Sans Mono

Dec 13, 2011 · Mr B Loid

Alex Russell posted a follow-up: wherever HTML5 goes (or should go) in the future, as of now, vendor prefixes have been a rousing success.
Will PHP Become a Niche Language?

Dec 10, 2011 · John Esposito

Facebook just posted about their HipHop virtual machine: https://www.facebook.com/notes/facebook-engineering/the-hiphop-virtual-machine/10150415177928920
Will PHP Become a Niche Language?

Dec 10, 2011 · John Esposito

Biggest datum: TIOBE says that Python usage grew faster in 2010 than any other language's (http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html) -- but it's true that this doesn't support the claim that PHP is shrinking (and TIOBE doesn't say that it is). Apples and oranges, strictly speaking, though App Engine complicates the comparison. langpop.com is the other major site I look at. But my feel for language popularity is also filtered through reddit programming -- which apparently loves Python -- and Google Trends (fwiw). None of which is remotely conclusive, of course, and I'm not really happy with any of these methods of measuring language popularity... but I can't think of anything better. Any suggestions? Would be much appreciated!
Will PHP Become a Niche Language?

Dec 10, 2011 · John Esposito

Good points -- I was definitely too vague in there. Here's what I was thinking (but correct me if I'm wrong): 'server-side web apps' is kind of large for 'niche', so perhaps in the future PHP will no longer be used for those kinds of web apps that other languages (Ruby, Python) handle better -- in which case PHP will still be used just for what it's good at (http://www.sitepoint.com/a-pro-php-rant/) -- like COBOL, which is why I found Watts' comparison interesting. It's hard to say that PHP is best-suited to 'less serious' apps, though, when Facebook is using PHP quite a lot...
Bill Gates to return as Microsoft's white knight

Dec 09, 2011 · Denzel D.

That sounds intuitively right, but maybe Kinect and the Nokia partnership will make up for the gap -- maybe more if Windows 8 makes multi-device development that much easier..? Some good discussion here: http://www.zdnet.com/tb/1-110356?tag=talkback-river;1_110356_2248176#1_110356_2248176
CSS3 and the death of Handheld Stylesheets

Dec 02, 2011 · Mr B Loid

http://www.lawsofform.org/ideas.html
Don't Use MongoDB

Nov 07, 2011 · admin

All a hoax, though? http://www.h-online.com/open/news/item/MongoDB-FUD-or-Hoax-controversy-spreads-online-1374710.html
Oracle APEX Builder Plugin v1.7 release!

Oct 31, 2011 · Patrick Wolf

I really like Steve McConnell's Code Complete: strong claims, clearly stated, presented with a practical eye and plenty of nice graphical summaries.
Using Bitwise XOR to exchange variable values in ActionScript

Oct 21, 2011 · Gerd Storm

Maybe: abstraction->efficiency(->freetime), but abstraction->usability(->freetime) only when really well (not just verbosely) documented.
