Many companies we speak to complain that their website does not really add value to the business, or that it's not meeting the objectives they originally envisaged. They get persuaded that it's now out of date and needs a revamp. In fact, if you analyse the most successful websites (Google, Twitter, Facebook, eBay), they hardly ever get revamped; sure, they add functionality, but the fundamental look and feel stays straightforward, uncomplicated, and constant.
Many branding agencies and web designers focus on delivering a site which the client likes (i.e. one that looks great), but there's a whole lot more they could be doing to educate clients about maximising the return from the website, which after all is its purpose.
The arguments against doing a business plan run: it's a brochure website, we don't need to go to all that effort; we are not an e-commerce site; how can we predict the number of visitors to plug into the plan? Well, the answer is that you can and should develop a plan. It need not be complicated or long-winded, but it should set out the potential market size, forecast an initial market share, and have targets for attracting visitors to the site and turning those visitors into clients. You can then develop targets for returning customers, financial performance and so on. Cube have developed a methodology to maximise the return from websites - Traffic, Trust and Transactions™.
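To make the shape of such a plan concrete, here is a toy funnel in Ruby. Every number below is a purely illustrative assumption, not real data or a benchmark: searches come from keyword research, and share and conversion rate are targets you set.

```ruby
# Toy planning funnel: searches -> visitors -> customers.
# All numbers are illustrative assumptions for a plan, not real data.
monthly_searches = 10_000  # from keyword research
search_share     = 0.05    # target share of those searches
conversion_rate  = 0.02    # visitors who become customers

visitors  = (monthly_searches * search_share).round  # => 500
customers = (visitors * conversion_rate).round       # => 10
```

Even this simple arithmetic gives you monthly targets to measure actual performance against.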
The first step is to do research to answer the following questions.
There are tools on the web, many of them free, which provide accurate information on the exact keywords and phrases people are searching for, the number of searches for those phrases month by month, and where those people are located, even down to the district of a town.
So now we know the web-based market size, month by month, for our products. How do we distinguish the people who are merely browsing from those who are ready to buy? Typically, a person who is ready to buy uses a phrase of four or more words (browsers use a single word; comparers use a two- or three-word search).
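That word-count heuristic is easy to automate. This minimal Ruby sketch encodes exactly the rule of thumb above and nothing more; real intent classification is, of course, more subtle.

```ruby
# Classify a search phrase by purchase intent using the word-count
# heuristic described above (a rough rule of thumb, not a science).
def search_intent(phrase)
  case phrase.split.size
  when 1    then :browsing   # single word: just looking around
  when 2..3 then :comparing  # two or three words: comparing options
  else           :buying     # four or more words: ready to buy
  end
end

search_intent("cheap red leather office chair")  # => :buying
```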
We use those phrases to analyse who the competition is, their strengths and weaknesses.
The website platform and the design need to have all the features you need to showcase your products or offering, and to convert visitors into customers or persuade them to respond to a call to action. It's all about engagement. The design needs to be clear and uncluttered (clean design), responsive (the presentation changes according to the device being used), and contain the most effective marketing and engagement tools, so you can engage your potential and existing customers to buy your products or offerings. The more you know about your visitors, the better chance you have of enticing them to buy, so visitor and customer intelligence tools are key to success. Many businesses spend a huge amount of time on ineffective social media; our website platforms feature tools which enable you to make the most of social media with the minimum amount of effort. Cube Creative have tuned platforms for Wordpress, Joomla and Magento.
The research undertaken in Step 1 enables the writing of effective landing pages to target those phrases searched for in our target geographies. We are not going to expand on what makes an effective landing page here, but suffice it to say it's an evolving art.
Using tools such as Google Analytics and StatCounter, it's possible to see the number of people who visit the site; how they got there (which keywords they searched on which search engine, or whether they arrived via Twitter or Facebook); their geographic location; and, if it's a corporate user, the company they work for. You can also see the pages they viewed on the website, how long they stayed on the site, the jump-off point, and the number of times they have returned. There's a whole lot more to performance analysis, but we are going to save that for our clients.
We now know the market size, our share of it, and the number of conversions from visitors to clients. We can now set targets and implement strategies to increase our share of searches and conversions, develop new products that people are actively looking for, or move into new geographies where we know there is demand.
The information contained in this article is really only an introduction to what should be done and what is possible. If you'd like to know more, contact us.
Shared hosting packages are available on the internet for approximately €50 per annum; unfortunately, many resellers are selling these for €400 per annum or even more.
Shared hosting is specifically designed to provide a simple platform which will work for most common website frameworks. It also means that more than 150 websites can be sharing the same IP address - but why is this important?
Search engines (Google et al.) constantly try to ensure that their top search results provide what the user is really looking for. They use "signals" to determine quality, so the web pages with the highest quality score rank highly.
If you have taken the time and effort to write properly structured content that is easy to read, grammatically correct and interesting, that in itself will increase your score. And the more high-quality websites (those that rank well on search engines) link to your web page, the stronger the signal to search engines.
However, a slowly responding web page will undo all that good. After all, who these days wants to wait around for a slow page to load? Search engines, like the citizens of the internet, have a short attention span, so consistently slow pages equal a poor rank. The answer? Pay for hosting optimised for your website, and make sure that you're getting what you think you are paying for.
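It's worth checking your hosting's response time yourself. This is a minimal Ruby sketch, not a production benchmark: a small timing helper, then a one-liner clocking a page fetch (example.com is a placeholder; swap in your own URL and run it a few times to see how consistent the response is).

```ruby
require "net/http"
require "uri"

# Minimal timing helper: run a block and return the elapsed milliseconds.
def time_ms
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  yield
  ((Process.clock_gettime(Process::CLOCK_MONOTONIC) - start) * 1000).round
end

# Clock a full response from a web server (placeholder URL):
# elapsed = time_ms { Net::HTTP.get_response(URI("https://example.com/")) }
# puts "server responded in #{elapsed} ms"
```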
Scalability is the ability of an application to satisfactorily service and respond to the number of users and requests required: the variable being the number of users or requests, and "satisfactorily service" being the subjective part. Typically, the scalability of an application is defined by the choices made by its original designers and developers, and the constraints given to them by the "customer".
Twitter, for example, was not designed from the ground up to support millions of users, and has therefore had scalability and reliability issues. It's not their fault; they became victims of the project's success. Having said that, it's not a trivial or cheap exercise to recover from a scalability issue (or a poorly written application) and come out with your brand reputation intact - so you have to hand it to those guys.
Best, then, to get it right from the outset (easier said than done).
We don't want to get paralysed by ruminating endlessly over the design, but it's much easier to fix things at the design stage than in production. So we need to do some planning from the outset.
Rule 1 - Get the scalability parameters from the business plan, and keep the client informed of the limitations of the design from the get-go
The client should have built a business plan for the project; if they haven't, that should ring your alarm bells. The business plan should dictate the scalability requirements for the design, and should give you sufficient information to determine the platform you should be using for the application.
Rule 2 - Challenge whether Ruby on Rails is the most appropriate platform for this application
Cube will be writing on the limitations of Ruby on Rails v2.x in a further blog.
Rule 3 - It's horses for courses, folks: waterfall for big objectives, agile for small utility-type applications
If the application is sizeable, and/or involves a medium to large user base, then you're going to need at least some waterfall techniques for the design; sure, you can mix in agile techniques where appropriate to keep things moving. But for big projects, pure agile development is going to lead you into a cul-de-sac in an articulated lorry, and it ain't going to be easy to escape from that one!
Rule 4 - Map out the objectives and make absolutely sure you understand the requirements for the application
So map out the objectives and goals for the application in written form; you'll need to speak to all the stakeholders, via interviews and group sessions if need be. If you can write it down easily, then you have put sufficient thought into the application; if it's hard to write down, you need further thought.
Rule 5 - If you can't write it down easily, you haven't put in sufficient thought
The map is going to form the basis of the specification of the application. Once you've completed and reviewed the map, the application architect takes on the mantle. The architect will map out and agree with the client the actors, roles, processes and artifacts, and crucially the interactions between them. You should then be in a position to develop and agree the views. These are best mocked up in pictorial form (whichever form suits the team best); you then need to review the pictorial views against the actor/role/process/artifact model. This technique is designed to ensure that the views will be usable in the real world, and it also provides good information about what data and methods will be required for the controller and model design.
Shoulda been called the VCM model in Rails, since the model and view never directly interact. Below is my interpretation of the Rails architecture.
Rule 6 - Stick with the rails rules for Views, Models, and Controllers from the outset
The controller is like a traffic cop marshalling requests: it takes requests from the view, parses them, handles sessions and cookies, and submits and requests data from the model; it also provides security for the application. It should be lean and mean. If it's not, you need to rethink and refactor.
Models validate, store and retrieve data from the database, and deal with the business logic. It's where all the hard work is done; in the traffic analogy, the model is the articulated lorry - it does all the heavy lifting and transport.
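The division of labour can be shown in plain Ruby, no Rails required. This is an illustrative sketch with made-up class names, not code from a real application: the "controller" only marshals the request and delegates, while the "model" owns validation and the business logic.

```ruby
# Illustrative model: owns validation and business logic.
class Order
  attr_reader :quantity, :errors

  def initialize(quantity)
    @quantity = quantity
    @errors = []
  end

  def valid?
    @errors << "quantity must be positive" unless quantity.is_a?(Integer) && quantity > 0
    @errors.empty?
  end

  def total(unit_price)
    quantity * unit_price  # business logic lives in the model
  end
end

# Illustrative controller: lean and mean - parse params, delegate, report.
class OrdersController
  def create(params)
    order = Order.new(params[:quantity])
    if order.valid?
      { status: :created, total: order.total(10) }
    else
      { status: :invalid, errors: order.errors }
    end
  end
end
```

If the controller starts accumulating calculations or validation rules of its own, that's the signal to push them down into the model.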
I can understand that sometimes you want to use a stored procedure to make the database server do the work, since it allows the logic to be split into an n-tier architecture. I'd resist that until I was absolutely sure there was no other way.
The next exercise is to perform the object-to-relational mapping and database design. It's absolutely crucial you get this right: it's very difficult to play around with a database design once you've got a couple of million rows in place, and some initially happy and expectant customers.
Rule 7 - Use the Rails conventions for Object, Table, and Relationship mapping
Unless you are porting a legacy database and have no other choice, stick with the Rails conventions for table and attribute naming. I've spent plenty of time regretting early decisions where I thought I was right and the conventions were wrong.
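For anyone new to the conventions, the gist is: a model class maps to a snake_case plural table, and belongs-to associations use a singular `_id` foreign key. The sketch below mimics that mapping in plain Ruby; the "+s" pluraliser is a naive stand-in for ActiveSupport's much smarter inflector, purely for illustration.

```ruby
# Naive sketch of the Rails naming convention:
#   class LineItem  -> table "line_items"
#   belongs_to :order -> column "order_id"
# (Real Rails uses ActiveSupport's inflector, which handles
# irregular plurals; this "+s" version is illustrative only.)
def table_name(class_name)
  snake = class_name.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
  "#{snake}s"
end

def foreign_key(class_name)
  snake = class_name.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
  "#{snake}_id"
end
```

Fighting these defaults means scattering overrides (`table_name=`, `foreign_key:`) through the codebase forever, which is exactly the regret described above.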
Rule 8 - Perform a sensibility check on the design
Finally, perform a sensibility check of your design against the actor/role/process/artifact model, just to make sure that you haven't missed anything. If you can't make this diagram look clean and readable, then it's likely that the design needs more work.
Ruby is an object-oriented interpreted language, developed in 1995 by Yukihiro "Matz" Matsumoto. It is an extremely elegant language which allows the programmer an immense degree of freedom. That freedom, though, comes at a price. As an interpreted language, Ruby is not the greyhound of the language world: in common benchmarks, Ruby 1.9 runs many times slower than compiled languages such as C++ or Java. But at least, with the advent of Ruby 1.9, we have the ability to take advantage of multiple OS threads; in versions prior to 1.9 we could only take advantage of a single OS thread.
The major limitation is the Global Interpreter Lock (GIL), which prevents more than one OS thread from running Ruby code at a time. So what does this mean? Well, we can't take advantage of multiple CPU cores, and we have an IO-blocking issue. The reason for the lock is the lack of certainty that the application is thread safe.
So what's thread safety, and why has it not been implemented before? Threads share the same memory address space, so the same variable can be written multiple times by multiple threads - so which one is the right (write?) one, and how do you implement thread safety? One answer is to take a lock right at the start of the function or method and drop it at the end. The problem is that this is expensive in resource terms, because you're blocking on the whole function. A more refined approach is to put locks around the writes to the variable; this is less expensive, but more complicated, and locks ain't easy to debug. A simpler option is to make variables write-once - this is the approach adopted in the dataflow gem. Thread safety is only important for parallelism, so should you bother? If you need an application to scale, a simple method is to add more hardware, but that only works if you have built in the tools to make the best use of the additional hardware.
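Here is the "lock around the write" approach in its simplest form. `Thread` and `Mutex` are built into Ruby; the mutex serialises updates to the shared counter so concurrent increments can't interleave and lose writes.

```ruby
# Ten threads each increment a shared counter 1,000 times.
# The Mutex guarantees each read-modify-write is atomic, so no
# increment is lost to interleaving.
counter = 0
lock    = Mutex.new

threads = 10.times.map do
  Thread.new do
    1_000.times { lock.synchronize { counter += 1 } }
  end
end
threads.each(&:join)

counter  # every increment accounted for: 10_000
```

The cost is exactly what's described above: every thread queues on the lock, so the critical section should stay as small as possible.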
Historically, the majority of system architects took the fork-and-exec daemon approach.

Application Partitioning

Let's assume that we have an application in which clients submit a form periodically, administrators perform administration, analysts analyse, and managers generate MIS reports. It's possible to partition the application so that different instances of the application handle each community; we may even adopt a different strategy for each community.
Applications sometimes require large blocks of code and complex database calls which block other, simpler operations whilst executing. One solution is to use message queuing. Basically, messages are passed to a queue; at the end of the queue is a background task-execution server, which pops messages off the queue, executes the task, and passes the results back via the messaging queue. It's possible to perform asynchronous operations within a page using this technique. A significant number of message-queuing components are available, such as Apache's ActiveMQ using the Stomp protocol.
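The pattern can be sketched in-process with Ruby's built-in thread-safe `Queue`: the web thread pushes jobs, a background worker pops and executes them, and results come back on a reply queue. This is an illustration of the shape of the technique only; a real deployment would put a broker such as ActiveMQ between two separate processes.

```ruby
# In-process message-queuing sketch using Ruby's thread-safe Queue.
jobs    = Queue.new
results = Queue.new

# Background worker: pop jobs and execute them until told to stop.
worker = Thread.new do
  while (job = jobs.pop) != :shutdown
    results << job[:payload].upcase  # stand-in for a slow task
  end
end

jobs << { payload: "report" }  # the "web thread" enqueues work
jobs << :shutdown
worker.join
# results.pop  # => "REPORT"
```

The web thread never blocks on the slow task; it only pays the cost of a queue push, and can collect the result later.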
Eventually, though, the database is going to become the blocking factor, even with connection pooling. Rails (at least as of 2.2) does not inherently handle different database connections concurrently, and that's where you need to think about alternative approaches such as message queuing - but that involves making significant changes to your code. If you're doing this at the design stage, then great, but make sure that the messaging engine you are going to use is appropriate.

Data Partitioning

Let's say you've got a huge database and it has become, or will become, the blocking factor; then you need to split the database up. You use a single database instance, or a cluster of them, as an index server: you perform a lookup for the record you require on the index server, and then access the record from the appropriate database instance. Currently, this technique is beyond Rails, so you'll need to perform some fancy footwork to get it going robustly.

The De-coupled Approach

An alternative approach to thread safety is a reinvention of the fork-and-exec daemon approach: use the HTTP server to handle and distribute incoming requests, generating multiple processes to handle them. The basic concept is to use Apache with mod-cluster to spawn several Mongrel servers. The principle is shown below. It's also possible to do this with several other web servers; an alternative is to use IBM's web server with JRuby.
The recent WordPress security alert advised users to upgrade to the latest version of WordPress, and published a list of those plugins that WordPress thought vulnerable. Users are advised to upgrade as soon as possible. In this particular case, however, keeping your plugins and WordPress up to date would not have protected you from infection. The core issue was a poorly documented subroutine/API which led developers to believe that they did not need to sanitise the parameters in the URL (which can be used by hackers to execute commands on the web server). The exploit can be used to leave behind nasty backdoors and malware which the attackers can activate at their leisure - so the infection won't necessarily be caught by a malware checker.
We also found that many modifications to plugins and themes made by web developers were vulnerable to the same security issue. Simply upgrading to the latest version of WordPress and its plugins won't necessarily help you if your site has already been infected. So how do you know if it has, and what can you do about it? The only real way of protecting yourself is to reinstall WordPress and all your plugins from scratch, install your theme (after checking it for infection), and import your data (after checking it for compromised comments and the like). That's going to be a long and expensive process - particularly if you have a big site.
We upgraded WordPress and plugins in a timely fashion on a shared VPS with about 10 WordPress sites on it, and then went looking at themes and at modifications to plugins made by others. We found that even after the upgrade the sites were compromised; in our particular case it was a PHP injection attack. We cleansed it by searching every PHP file for the pattern of the attack and removing the offending code, and we also scheduled a search for the pattern every hour so we can see if we are still vulnerable. We then went looking on our dedicated VPSs and found exactly the same issues. We then had to reinstall WordPress from scratch on every site, just to be sure that no trace of the infection was left behind.
The pattern we found was a PHP injection attack; it may not be the same for everyone, and the injected code is disguised to look like it belongs there. Look for "Speedup php function cache" in all PHP files: the function uses base64 to inject whatever code the hacker likes into your PHP files.
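A scan like the one we scheduled can be a few lines of Ruby. This is a hedged sketch of the idea, not our exact script: it walks every `.php` file under a directory and flags those containing the injected signature string (which, again, may differ for other attacks).

```ruby
# Walk a directory tree and report .php files containing the
# injection signature we observed. Swap SIGNATURE for whatever
# pattern your own compromise uses.
SIGNATURE = "Speedup php function cache"

def infected_files(root)
  Dir.glob(File.join(root, "**", "*.php")).select do |path|
    File.read(path).include?(SIGNATURE)
  end
end

# infected_files("/var/www")  # => list of compromised files to inspect
```

Running something like this from cron every hour gives you an early warning if the infection returns after a clean-up.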
A strong message to all developers: always, always escape input and URIs from whatever source (<input>, GET, server variables, URIs), even if you think the value is safe and has been escaped before.
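In Ruby, the stdlib tool for this is `CGI.escapeHTML`; most frameworks offer an equivalent (Rails, for instance, escapes output by default and provides the `h` helper). The example payload below is illustrative:

```ruby
require "cgi"

# Escape untrusted input before it is ever echoed back into a page,
# so a script payload renders as harmless text instead of executing.
unsafe = %(<script>alert("pwned")</script>)
safe   = CGI.escapeHTML(unsafe)
# safe now reads: &lt;script&gt;alert(&quot;pwned&quot;)&lt;/script&gt;
```

The same discipline applies in PHP (`htmlspecialchars`) and every other web language: escape at the boundary, every time.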
An important factor in the success or failure of your website is the time it takes to load a page. Google likes your page to load in under 2 seconds; just putting up a page with good-quality content that loads quickly will work miracles for your ranking. Some website owners load a page on their website during development and it appears incredibly quick - job done! Sadly, that's not always the case. Modern browsers save images and files into local temporary files when you first load a page, and use those locally held files (termed the cache) when the page is requested again. A first-time visitor, though, has to fetch those files from your web server, transfer them to their local machine, and then render the page (that's the process of displaying images and formatting text). We have seen web pages which take 35 seconds to load for a first-time visitor, but 1.7 seconds on a repeat visit. Research has shown that slow web pages (and by that we mean pages that take more than 4 seconds to load) lose 50% of visitors to rivals.
Really fast pages make an impact before the visitor has even read the content; they convey a conscious and subconscious message about your company's approach to quality, customer service, and the importance of online services within your business. Search engines want to convey a quality experience to their customers; their search algorithms avoid awarding a top ranking to slow pages, as page speed is an important signal that the page has the potential to contain quality content.
If your web page is slow, what message does that convey to potential customers (and search engines) about your company? Speed is an important indicator of a company's approach to quality, customer service, and online services.
Modern browsers are incredibly tolerant of errors, so whilst a page may look great, it might not be written very well. It takes browsers extra time to work out how best to render a page when there are errors on it. In addition, when your page is visited by search engines, they analyse it for errors; it's one of the quality indicators (or signals, in SEO parlance) they use to help their ranking algorithm determine your quality score.
The larger an image, the longer it takes to transfer across the internet, right? Yes, that's true - but just because an image looks small on the screen does not mean that the actual image file is not huge. We have seen a website with a logo which measured 100 pixels wide on the screen, but was 10,000 pixels wide in the image file; the HTML code tells the browser to resize the image. We typically find that the majority of images on a website have not been optimised, and this can seriously affect page load time. Search engines can calculate the size at which an image is displayed on the screen and the size the image file actually is; if the two differ widely, it marks down the quality score for that page. In theory you should have a different image file for each device size (i.e. mobile, desktop, tablet).
Computers ignore whitespace (spaces, newlines, tabs) in programming code, but programmers and software engineers prefer nicely formatted text, which is easier to read. The additional overhead caused by this whitespace can be hundreds of kilobytes, or even megabytes, of unnecessary data. We'll discuss in the next article how to keep the software engineers, internet users and website owners all happy.
Poor design is often responsible for poor performance. Bloated themes and templates are common on the internet; they typically look great, but contain huge images and massive amounts of code which the typical website never needs. Some website designers also take shortcuts, copying styles from websites they have built before, which increases page load time. Designs should be lean and contain only the styles and code necessary to run the website. For each file the website requires, the browser has to make a separate request to the web server, which has to find the file, load it, and transfer it across the internet to the browser. It's imperative to minimise the number of files required by a website, and again, thankfully, there are techniques to optimise even poorly designed websites. None of this means that you must have a text-only website; you can have an image- and video-rich website, but you must design it with performance in mind.
When you copy and paste text from a word processor into a web page, it will typically copy in all of its internal formatting code, so on the screen you see:
This is some formatted text - but the code needed to produce "This is some formatted text" when pasted from a word processor is huge. Take a look yourself.
It's very tempting to opt for cheap hosting - it's all the same, right? In reality, that's not true. If you are using a shared hosting platform, it's possible that you are sharing an IP address (and hardware) with up to 2,000 other websites. To maximise the benefit to the service provider (did I say that? I meant to say the client), they impose restrictions on which optimisation functions a website can use, in order to restrict the amount of CPU and memory used by any particular website. The hosting platform will also be sharing disk with other applications in the datacentre. All of this can contribute to a slow or inconsistent response time from the web server. Google likes your web server to respond in under 200 milliseconds, and remember: your web server has to serve each file required to render your web page. So hosting is an incredibly important factor in response time.