This article covers setting up a Ruby on Rails environment on Windows with IIS, developing a simple blog application, and deploying it to production.
Before we can publish our application to the world, we should at least test that it works in our environment. Here is how you can set up a working Ruby environment that uses IIS as a web server. Windows XP, Vista, Windows 7, Windows 2003, Windows 2008 (R2) and Windows 2012 are supported.
First, download and install the Microsoft Web Platform Installer (WebPI), run it and click Options. Add the Helicon Zoo feed link http://www.helicontech.com/zoo/feed.xml to the «Custom feeds» field.
Please note the web server choice – IIS Express or IIS. The main difference is that IIS Express runs as an interactive user application, which usually means administrative permissions if you are logged in as Administrator. This simplifies the development process and decreases the number of issues you may encounter with insufficient NTFS permissions to run the application. With IIS, web applications are executed as a restricted user and may require additional permissions tuning, but you get an environment much closer to the one that will run the application in production. In this article we will use IIS Express with WebMatrix as the development environment and IIS for production.
After adding the custom feed, a new Zoo tab will appear with Applications, Templates, Packages, Modules and Engines sections in it.
The good thing about Helicon Zoo is that creating a new application and installing its environment is done in one step, because the application will check and install all needed dependencies automatically. Go to Zoo –> Templates, choose Ruby project and install it.
Depending on the server chosen (IIS or IIS Express) and your system configuration, many components may be downloaded and installed the first time. Hopefully everything goes right and all sites with the required components will come online, so your environment should be configured soon.
Warning: if you have already manually configured a Ruby environment, please use the packages that come with Helicon Zoo through the feed and Web Platform Installer instead. The Helicon Zoo packages have nothing magical inside: we simply install the Windows Ruby Installer and DevKit packages from the official web site into default locations (like C:\Ruby19, C:\Ruby18, C:\Ruby20), which is important since these locations are hardcoded in some other components, set correct NTFS permissions for the IIS_IUSRS group, etc. Even though it is possible in theory to use your custom Ruby installation with Helicon Zoo, our experience shows that it will most likely bring numerous issues. Helicon Zoo requires a default Ruby configuration and needs no globally installed gems – this is the idea of isolation. If you configure your Ruby instance manually, we may be unable to provide support for it.
If you have chosen IIS Express as the web server, WebMatrix will be launched automatically after installation finishes and you should see the project's index page with further instructions. With IIS installations an additional step is required, where you choose port and host name bindings, a folder on disk for the web site, etc. IIS Express web sites are created in \Documents\My Web Sites\.
This Welcome page is served by the URL Rewriting Module when no Ruby application is present in the folder; therefore the Microsoft URL Rewriting Module is a dependency of the Ruby project template, and without it you may see an error. When an application with a config.ru file appears in the folder, this page will be replaced by the real index page.
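For reference, config.ru is the standard Rack entry point that Zoo looks for. Here is a minimal sketch (the handler body and message are illustrative, not the template's actual welcome logic):

```ruby
# config.ru -- the file Helicon Zoo detects to treat the folder as a Rack app.
# Any object that responds to #call and returns [status, headers, body] works.
app = proc do |env|
  [200, { "Content-Type" => "text/plain" }, ["Hello from Rack on IIS"]]
end
run app if respond_to?(:run) # `run` is supplied by the rackup DSL at load time
```

An existing Rack application copied into the site root is picked up the same way, through its own config.ru.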
The initial project’s content is very simple:
An empty console folder, which is a simple placeholder in case you wish to configure authentication for web console access later; a public folder that stores static files (including the welcome page); a deploy_sample.rb file that we will talk about later; and a web.config file that is essential to configure the Zoo application in this folder. If you have an existing Ruby Rack application, you can simply copy its files into the root folder, keeping the existing files intact, and it should start working with very little or no modification at all.
The instruction on the Welcome page asks us to click a link and start the Web console, but first we will learn one more tool – Helicon Zoo Manager. Go to the Start menu, Helicon –> Zoo –> Helicon Zoo Manager (either for IIS or IIS Express, depending on the server type you are using). This manager lists all your current web sites; here you can modify the configuration of the Helicon Zoo Module, enable or disable engines, set environment variables, start the Web Console or an IDE, etc. By default Helicon Zoo runs Ruby applications using Ruby 1.9 Rack, and you can change this using Helicon Zoo Manager. After launching the manager, select the web site you are currently working with in the left tree. The only application, named ruby.project, will be pre-selected, so let's click the Edit button and change the RACK_ENV environment variable from production to development. Press Apply in the properties window and Apply in the main window to save the settings into web.config. Production mode ensures faster operation, but in development mode we don't need to restart the IIS Express application every time we modify files in the application, and we also get more verbose error messages. With some hosting services you can have two different web.config files for the production and development environments, which is convenient.
Then click the Start web console button to launch the console for this application.
You may ask: why use this web console when I can just run cmd.exe or any IDE to run commands? The answer is that the Helicon Zoo web console is designed to run commands in the isolated environment of your application: all commands are applied to the application you are working with, using its local folders and environment variables, and are executed by the same interpreter and the same IIS application pool user that runs the application itself. This keeps applications portable, because all modules and components are installed into the application folder and the execution environment is easily replicated with Helicon Zoo Hosting Package installations on other machines. On the other hand, if you launch the Windows console from the Start menu, you may actually have a number of environments and interpreters installed on your machine, like several different versions of Ruby and Python. With the Windows console, when you run a command you can't tell for sure which exact version of the interpreter you are calling, where it is located, where it will store its settings, etc. IDEs and module-installation commands will usually install modules globally into the system, so your application will lose portability. There can also be conflicts between different versions of engines or modules installed in the system when you run a global command-line interface. This is why it is always recommended to install web application engines like Ruby or Python distributions using the Helicon Zoo repository, for example by installing Hosting Packages, instead of installing engines manually. Using the Helicon Zoo web console, or launching an IDE from Helicon Zoo Manager, may also be essential if you want to avoid version conflicts and retain application portability.
This console is run by the Zoo Module as an HTTP application in your browser. Anonymous remote requests to the console are prohibited by the Zoo engine for security reasons, so if you wish to access the console on a remote server you will have to enable one of the authentication methods for the console folder (or whatever location you have configured as the console). Alternatively, you can use IIS Manager to connect to the remote server and start the console from the Helicon Zoo IIS Manager snap-in. Helicon Zoo Manager installs a snap-in for Internet Information Services Manager, which you can use even in remote mode, when IIS Manager is connected to a remote server.
A one-time hash code is used to authenticate the console session and is invalidated when you close the console window. The ability to start the web console can be enabled or disabled globally and for individual applications, which is useful for hosters. Please read more about the web console here.
So start the web console and type:
gem install rails
And go get some coffee, because this command takes really, really long and works mostly silently. Please be patient and don't try to restart the console; honestly, it is working. For me it sometimes takes about 20 minutes of 'installing Rails', which, I guess, depends on internet speed, system configuration, etc. If you close the console window, the console process on the remote server will be killed within 10 minutes, along with all spawned processes. The same happens if you click the Cancel button in the bottom right corner – this kills the console and all child processes on the remote (or local) machine immediately and restarts the console. This button is useful if something goes really wrong, like a hung command. After the command finishes you should see the normal output of the 'gem install' command.
Notice the short paths in the console – these are needed because many Ruby scripts don't work well with long paths. And forget about umlauts and other national characters in file names. If you need them in links, you can add them later using URL mappings. The 'gem install' command can not only download and install gems but, thanks to DevKit, can also compile native C gems right on your system.
Let's look into our project's file structure now. You may need to hit F5 on the root node in WebMatrix to refresh it.
Notice the new GEM_HOME folder – this is where all gems have been installed. When the application is executed, Helicon Zoo will use gems from this folder. The sad thing is that you will have to repeat the 'gem install' operation for each application you are working on. You can still have global gems, though it is not recommended. The good thing is that you may have different versions of gems for each application: different versions of Rails, Sinatra or anything else for every application, with no conflicts. You can update Rails for one application and still have your legacy code running on older versions in other sites. Isolation is the key feature of Zoo.
So Rails and other dependent gems are installed, but this is not a Rails application yet. Let's create one by running the following command:
rails new .
Now if we look at the web site content, we'll see new folders and files – the new Rails application is finally created.
Refresh the application page in the browser and you should now see the Ruby on Rails welcome page (sometimes this requires restarting the IIS Express application):
Another useful feature of Helicon Zoo Manager is the ability to start an IDE for the application environment. This is not just a shortcut to your favorite IDE: before launching the IDE, Zoo Manager configures the environment according to the environment variables of the selected application. Most current IDEs can read these variables to configure locations correctly – gem installation folders, working directories, a Path variable with the correct location of the required Ruby interpreter version, etc. Open Helicon Zoo Manager and click Start IDE. The first time you do this for an application, a small Select IDE dialog will appear. By default it opens the Windows command line (cmd.exe), which is a convenient replacement for the Web Console if you develop the application locally. This command-line interface is launched with all paths configured for your application, so the 'gem install' command will install gems into the application folder, same as with the web console. The difference is that this cmd.exe is executed as the interactively logged-on user, while the web console is executed as the IIS application pool user, which may differ significantly in permissions. So for development purposes on a local machine, using the Start IDE command is even more convenient than the web console.
But instead of using the ascetic command line, you can configure your favorite IDE to start with this command. Environment variables will be configured before launching the application, and the IDE will know the correct locations of files, like GEM_HOME, the location of the Ruby interpreter, etc. For example, to start Aptana with the project's folder already open, you can use the following command:
AptanaStudio3.exe "%APPL_PHYSICAL_PATH%"
If you prefer IDE with debugger and refactoring tools, we recommend one of the following options:
Each of these offers a bunch of development and testing tools for Ruby apps and supports version control.
For those who can make do with simpler solutions, there are:
For our app we used Aptana:
Ruby on Rails is based on the MVC architecture (model, view, controller). This approach has several advantages:
The model defines the database structure in terms of object-oriented programming. In Rails a model is an ordinary class inheriting all necessary functions from the ActiveRecord::Base class. An instance (object) of this class represents one row of the corresponding database table. Thus, models conceal the peculiarities of interaction with a particular DBMS from the developer.
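The class-to-table mapping can be sketched in plain Ruby. The `ActiveRecord::Base` below is a tiny stand-in written only to make the sketch self-contained; in a real Rails application the class comes from the activerecord gem and does far more:

```ruby
# Stand-in for ActiveRecord::Base, just so this sketch runs outside Rails.
module ActiveRecord
  class Base
    # Rails derives the table name from the class name by convention
    # (the real implementation pluralizes and underscores it).
    def self.table_name
      name.downcase + "s"
    end
  end
end

# Each model class maps to one table; each instance maps to one row.
class Post < ActiveRecord::Base
end
```

So `Post.table_name` yields `"posts"` – the convention-over-configuration idea the paragraph above describes.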
The view is the interface shown to users. At this stage the developer creates templates which are transformed into HTML, CSS or JavaScript code.
The controller connects the model with the view. It's usually the controller that contains the main logic. Essentially, controllers are Ruby classes. Each public method of a controller is called an action. If you have a controller named “Home” and it contains a method named “index”, then opening /home/index in the browser will invoke the “index” action.
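The controller/action naming can be sketched as follows. The base class here is a stub so the sketch runs outside Rails, and the return value stands in for the rendering step:

```ruby
# Stand-in base class, so the sketch is self-contained outside Rails.
class ApplicationController; end

# Opening /home/index invokes HomeController#index; in a real Rails app
# the action would render app/views/home/index.html.erb instead of
# returning a string.
class HomeController < ApplicationController
  def index
    "home#index"
  end
end
```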
When a request comes to the application, the routing mechanism (in the config/routes.rb file) decides which controller handles that type of request. Aside from the URL, a set of other conditions may be taken into consideration; e.g. you can assign different controllers for different browsers, for mobile clients, etc.
So, having chosen the controller, routing defines which action will process the request. At this point numerous conditions can also be applied. The action itself performs some calculations and DB-related operations. When the action is finished, the view comes on the scene. The data from the DB, or some result, is passed to the templates. Then the HTML page is generated from the templates (there are templates for CSS and JavaScript as well) and the response page is sent to the user.
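The routing described above lives in config/routes.rb. A hypothetical fragment for an application like ours might look as follows (Rails 3 syntax; the application name and route entries are illustrative, and this is not runnable outside a Rails project):

```ruby
Blog::Application.routes.draw do
  get "home/index"          # maps /home/index to HomeController#index
  resources :posts          # standard RESTful routes for the Posts controller
  root :to => "home#index"  # the site's front page
end
```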
The classic example of a Rails app is the simplest blog. We won't break with this tradition. So, let's create the controller “Home” with the action “index” – this is going to be the blog's main page. This is done with the following command:
rails g controller home index
If we now request /home/index, we'll get the page from the template created for the “index” action.
Afterwards, we'll create a simple Post model which will represent each blog entry in the DB. In Ruby code the model is a class, and it is represented in the DB as a table. Thus, an object of the Post class is a row in the corresponding table in the database.
To create the model you could simply run “rails g model Post…”, but let's make use of a very handy tool – scaffolding. The “rails g scaffold” command creates not only the model class itself and tests for it, but also action drafts and view templates for adding, editing and removing model objects. If we execute this
rails g scaffold Post name:string title:string content:text
we'll get the “Post” model in app\models\post.rb, the “Posts” controller in app\controllers\posts_controller.rb with actions index, show, new, edit, update, create and destroy, plus a DB migration scenario in db\migrate. The command will also have created the headers for tests and HTML templates. Notice that we haven't yet written a single line of code!
Now we are going to install the database. In this example we use SQLite, which is the default for Rails. However, Rails can work with many other DBMSs, hiding the details of interaction from the user. Run the following command to install SQLite:
gem install sqlite3
Next, we'll run the command to create the database (if it's not yet created) and the table “posts” with the fields “name”, “title” and “content”:
rake db:migrate
The database migration command is used for creating and editing the database structure in accordance with our object model. It should be executed every time you make changes to the application model. All the magic of matching the DB structure with our model is done automatically, and all the data stored in the database is retained.
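For reference, the scaffold command shown earlier generates a migration in db\migrate along these lines (the exact boilerplate varies slightly between Rails versions, and this fragment needs a Rails project to run):

```ruby
class CreatePosts < ActiveRecord::Migration
  def change
    create_table :posts do |t|
      t.string :name
      t.string :title
      t.text :content

      t.timestamps   # adds created_at and updated_at columns
    end
  end
end
```

Running `rake db:migrate` applies every migration that has not been applied yet, in order.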
Actions of the “Posts” controller are accessible at the /posts/ address.
If you press “New post”, you’ll see the form:
After filling all fields we get to the new post page:
Note that we still haven't written any code. Now let's do some editing. For instance, we may want to make the post name and post title obligatory fields, so that the corresponding cells in the DB are always non-empty. Luckily, Rails provides a very simple validation mechanism. We just need to edit the model file, app\models\post.rb, as follows:
class Post < ActiveRecord::Base
  validates :name, :presence => true
  validates :title, :presence => true,
                    :length => { :minimum => 5 }
end
Here we specify that the “name” and “title” fields are obligatory and that “title” should contain no fewer than 5 characters. There is no need to run a migration after this change, as validators are not directly related to the database; the check is performed at the level of Ruby code.
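To illustrate how these validators behave, here is a hypothetical `rails console` session (the exact error strings come from the Rails locale files and may differ by version):

```ruby
post = Post.new(:title => "Hi")  # no name, and title shorter than 5 chars
post.valid?            # false
post.errors[:name]     # ["can't be blank"]
post.errors[:title]    # ["is too short (minimum is 5 characters)"]
post.save              # false; nothing is written to the database
```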
If you now leave the “name” field empty, you’ll get an error:
Let's make it a little more complicated and add a comments feature. We'll create a “Comment” model with the following command:
rails g model Comment commenter:string body:text post:references
Pay attention to the “post:references” parameter. It connects the “comments” table with the “posts” table.
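Along with the migration, the generator creates the model in app\models\comment.rb; the “post:references” parameter is what adds the `belongs_to` line (this fragment needs a Rails project to run):

```ruby
class Comment < ActiveRecord::Base
  belongs_to :post   # the comments table gets an integer post_id column
end
```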
Now refresh the database:
rake db:migrate
Next, we'll set up the “has many” relation for the Post model:
class Post < ActiveRecord::Base
  validates :name, :presence => true
  validates :title, :presence => true,
                    :length => { :minimum => 5 }
  has_many :comments, :dependent => :destroy
end
The code is intuitively clear: each Post object may have a number of comments. “:dependent => :destroy” means that when a post is deleted, all its comments are deleted as well. Since we didn't use scaffolding this time to create the comments model, we now need to generate the corresponding controller:
rails g controller Comments
In config\routes.rb file replace “resources :posts” with:
resources :posts do
  resources :comments
end
In this way we specify how the “comments” controller will be accessible. In our case it's nested inside “posts”, so the links will look like http://localhost:41639/posts/1/comments/3
Then we need to update the template app\views\posts\show.html.erb so that it becomes possible to add comments. After:
<p>
  <b>Content:</b>
  <%= @post.content %>
</p>
add the following code:
<h2>Comments</h2>
<% @post.comments.each do |comment| %>
  <p>
    <b>Commenter:</b>
    <%= comment.commenter %>
  </p>
  <p>
    <b>Comment:</b>
    <%= comment.body %>
  </p>
  <p>
    <%= link_to 'Destroy Comment', [comment.post, comment],
                :confirm => 'Are you sure?',
                :method => :delete %>
  </p>
<% end %>

<h2>Add a comment:</h2>
<%= form_for([@post, @post.comments.build]) do |f| %>
  <div class="field">
    <%= f.label :commenter %><br />
    <%= f.text_field :commenter %>
  </div>
  <div class="field">
    <%= f.label :body %><br />
    <%= f.text_area :body %>
  </div>
  <div class="actions">
    <%= f.submit %>
  </div>
<% end %>
Finally, we'll define the logic of the controller in app\controllers\comments_controller.rb:
class CommentsController < ApplicationController
  def create
    @post = Post.find(params[:post_id])
    @comment = @post.comments.create(params[:comment])
    redirect_to post_path(@post)
  end

  def destroy
    @post = Post.find(params[:post_id])
    @comment = @post.comments.find(params[:id])
    @comment.destroy
    redirect_to post_path(@post)
  end
end
And everything is ready for adding comments to the post:
The basic functionality is implemented. As the last step, we'll protect some of the actions so that unwanted people don't have access to them. The more comprehensive way is to use registration, sessions, cookies, etc., but to keep it simple we'll use Basic authentication, all the more so because in Rails we need only one line to enable it. Put the following in posts_controller.rb:
http_basic_authenticate_with :name => "admin", :password => "123", :except => [:index, :show]
We have hard-coded the login and password. The “:except” parameter excludes the “:index” and “:show” actions, as they don't need authentication.
So, we've created the application and now, logically, we want to publish it to the Internet. For that purpose we'll set up a Windows server to work with Ruby in production. We'll have to repeat several steps from the beginning of the article which were used to install the development environment, but this time we'll configure a production IIS server. If you are considering organizing Ruby hosting on Windows servers, you will need to complete exactly the same steps:
Now install Ruby Hosting Package from Zoo –> Packages.
This will install the Ruby 1.8, Ruby 1.9, Ruby 2.0 and Development Kit packages from RubyInstaller, normally into the C:\Ruby18, C:\Ruby19 and C:\Ruby20 folders respectively, and set correct NTFS permissions on these folders for default IIS installations. You can then update these installation packages manually, leaving the installation folders as they are, because these folders are hardcoded in some WebPI components. But remember – the installation packages from the Zoo repository have been tested with Zoo, while you upgrade at your own risk. You may consider installing all three Ruby packages a waste of resources when you need only one, but they are small and install quickly, so it isn't worth worrying about. Plus, with the Zoo system these packages will not conflict. The Ruby Hosting Package may also install IIS and other required Windows components if they are not already present in the system. Eventually it will install the Helicon Zoo Module, which is required to host these applications with IIS.
After these steps are completed, the server is ready to host our application. At the moment the following server platforms are supported: Windows Server 2008 and 2008 R2 and Windows Server 2012, all 32- and 64-bit versions where applicable. The reason older versions are unavailable is that the Helicon Zoo Module uses the native IIS 7 API; therefore everything prior to IIS version 7 is unsupported, while all newer versions of IIS should be fine.
So, first we create an empty web site via IIS Manager or your hosting panel. Then simply upload the entire web site folder with your application to the server via FTP, Web Deploy or any other way. I would recommend configuring Web Deploy on the production server. This tool makes deployment of applications from WebMatrix or Visual Studio really easy; plus, all application folders and files will be given the proper permissions automatically, as set by Helicon's Ruby project template. Generally you may need to enable write permissions on the entire web site folder for the user running the application, because a Ruby application will want to write things sometimes. You can also use Git or any other version control and deployment system, but that falls beyond the scope of this article, as does fine-tuning write permissions.
Then, in general, you just navigate to the web site and it opens. The application is executed on Windows Server under IIS with the help of the Helicon Zoo Module. This module was initially designed as a hosting solution, so all applications are isolated and do not interfere with each other. With its default options the module works in fully automatic mode, creating one worker process per application when the load is low and increasing the number of workers up to the number of CPU cores of the server, providing maximal performance under heavy load. These settings can be changed per engine using Helicon Zoo Manager.
Helicon Zoo implements the concept of engines and applications. Engines define how to run an application: which interpreter to use, which protocol and port, the maximum number of workers allowed, and other global settings defined in applicationHost.config – so if a user has no write permission to applicationHost.config, he can't change engine settings. A Helicon Zoo application then 'uses' an engine by referencing it from the web.config file inside the web site folder. Each engine may have a list of parameters – environment variables – which users can set. This concept allows separating the hosting administrator's duties from the clients' (and one client from another as well). You can learn more about Helicon Zoo Module configuration here.
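For illustration, here is a minimal sketch of the part of web.config that binds an application to an engine and sets its environment variables. It is modeled on the Ruby project template; a real file contains more entries, and the variable shown is just one example:

```xml
<heliconZoo>
  <application name="ruby.project">
    <environmentVariables>
      <!-- per-application parameters exposed by the engine -->
      <add name="RACK_ENV" value="production" />
    </environmentVariables>
  </application>
</heliconZoo>
```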
Sometimes, when your application is not just an empty database-less blog as in our example, simply copying files to the production environment is not enough. For example, your application may use an external database server, and you may need to execute database migration tasks in the production environment before the new code can be executed. For this purpose Helicon Zoo offers a very convenient tool called deploy scripts. Please notice the DEPLOY_FILE="deploy.rb" environment variable in the Helicon Ruby project template. It means that every time the Helicon Zoo engine finds a deploy.rb file in the root of the web site, it will initiate the deployment process.
Here is how the default “Application deployment in progress” message looks:
The Ruby project template includes a file named deploy_sample.rb. This file contains common deployment instructions for database migrations, etc. So to initiate the deployment process you only need to rename that file from deploy_sample.rb to deploy.rb and push it to the server. You may want to make this the last change when you upload modifications, to make sure all other scripts and files have been updated before the deployment process starts. If the RACK_ENV environment variable is set to production, Ruby will not load updated code files unless restarted, so initiating the deployment process will do everything synchronously – migrate the database and then load the new code into the engines. This is so-called “cold application maintenance”, which is needed because if other user requests (with either new or old code) run while database migrations and other deployment tasks are executed, the data could be corrupted or users could get unpredictable responses. Helicon Zoo minimizes application downtime to mere seconds and automates the whole deployment process, so it becomes easy to deploy large applications across an array of servers.
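As an illustration, a hypothetical deploy.rb might enumerate its steps like this. The deploy_sample.rb shipped with the template is the authoritative reference; the commands below are typical examples, and the `system` call is commented out so the sketch is safe to run anywhere:

```ruby
# Hypothetical deploy script: each command runs once, inside the
# application's isolated environment, while user requests are held back.
steps = [
  "bundle install",    # install gems from the Gemfile into the local GEM_HOME
  "rake db:migrate",   # bring the database schema up to date
]

steps.each do |cmd|
  puts "deploy: #{cmd}"
  # system(cmd) or raise "deploy step failed: #{cmd}"  # enable on a real server
end
```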
Please read more about deployment scripts in Helicon Zoo documentation.
The machine used as a server had a Core 2 Quad 2.4 GHz, 8 GB RAM and a 1 Gb LAN. For load generation we took a more powerful PC and the Apache Benchmark tool with the command “ab.exe -n 100000 -c 100 –k”. For the Apache and Nginx servers we used Ubuntu 11.04 Server x64. The IIS 7 tests were run on Windows Server 2008 R2. No virtual machines were used – bare hardware.
We’ve conducted three test scenarios:
We performed the tests with Ruby 1.9.3, Rails 3.1.1 and MySQL 5.1.54. In the case of HTTP transport, Thin acted as the backend HTTP service; neither Unicorn nor Passenger works on Windows. So there were three configurations for testing: Windows + IIS 7.5 + Helicon Zoo + Thin, Ubuntu + Apache + Passenger, Ubuntu + Nginx + Thin.
Below are the test results (in requests per second):
Here are more detailed ab graphs for the first test (time output):
Summary
Ruby on Rails has proved itself a perfect framework for quick and easy web development. Of course, Ruby is not the one and only. In the next articles we'll shed some light on Goliath and Sinatra.
We would also like to underline that Windows is a mighty platform both for development with Ruby and for running Ruby apps in production. And if earlier the difference in Ruby performance on Linux and Windows was dramatic, now the performance, as well as the convenience, of Ruby on Rails on Windows has significantly improved, to a degree that performance is no longer a key criterion for choosing the platform.
A new tab named “Zoo” should appear on the main page of Platform Installer.
Web Platform Installer will start downloading and installing the required components, which include Python 2.7.2, Django 1.3, Helicon Zoo Module, MySQL and OSQA itself.
The default administrator user name for MySQL is 'root' and the default password is empty.
The installation will configure the database and run migration scripts; you can then launch the web site by clicking the link:
Congratulations! You have finished installation and may start using OSQA on your Windows server with Microsoft IIS in production:
A new tab named “Zoo” should appear on the main page of Platform Installer.
This will automatically download and install all required components, including Ruby, Rails, Helicon Zoo Module and Redmine itself.
When you install Redmine into a sub-directory, additional configuration is required. Open the config/environment.rb file and add the following line at the bottom:
Redmine::Utils::relative_url_root = ENV[ 'APPL_VIRTUAL_PATH' ]
See http://www.redmine.org/projects/redmine/wiki/HowTo_Install_Redmine_in_a_sub-URI for more information.
Redmine supports multiple database engines. By default SQLite is installed; however, there are example configuration files for MySQL and PostgreSQL within the “config” folder of the Redmine application. If you wish to use MySQL, for instance, take the “database.yml.mysql” file, name it “database.yml” and alter it according to your MySQL database settings. If you have already run the deployment script, you need to run it again after the database change: rename the deploy_done.rb file in the root web site folder to deploy.rb and request some page. The deployment process will be initiated and database migration tasks will be executed.
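A “production” section of such a database.yml might look like the following sketch. The host, database name and credentials are placeholders to replace with your own, and whether the adapter is mysql or mysql2 depends on the Redmine version and the gem you have installed:

```yaml
production:
  adapter: mysql2
  database: redmine
  host: localhost
  username: root
  password: ""
  encoding: utf8
```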
Sometimes you may need to manually execute tasks against a Ruby application. Use the Helicon Zoo web console for this purpose. To start the web console, go to Windows Start –> Programs –> Helicon –> Zoo –> Helicon Zoo Manager, select your web site and click the Start Web Console button. Then use this console to run, for example, the “rake db:migrate” command and see its output in real time.
If you need to install a specific or modified version of Redmine, below are manual installation instructions that you can adapt to your needs.
First, run the Web Platform Installer, go to Zoo –> Templates and install Ruby Project as you would normally install an application on either IIS or IIS Express.
This will install a generic Ruby project running with Zoo and all requirements, including Ruby, DevKit, the Zoo Module, etc. After the web site is created you should see the Ruby project's welcome page:
Download the Redmine package as a ZIP file here: http://rubyforge.org/frs/?group_id=1850 Unzip the content of the redmine-x.x.x folder from the ZIP file (where the config.ru file is located) directly into the web site's (or application's) root, i.e. the config.ru file should end up in the same folder as the web.config file.
Now we need to set up a database for Redmine. You can follow the recommendations on this official page to create the database: http://www.redmine.org/projects/redmine/wiki/RedmineInstall
The simplest way is to use SQLite as the database, because it does not require any database server installation. To use it, just create a config\database.yml file in the Redmine installation with the following content:
# SQLite version 3.x
development:
  adapter: <%= "jdbc" if defined?(JRUBY_PLATFORM) %>sqlite3
  database: db/development.sqlite3
  timeout: 5000

# Warning: The database defined as 'test' will be erased and
# re-generated from your development database when you run 'rake'.
# Do not set this db to the same as development or production.
test:
  adapter: <%= "jdbc" if defined?(JRUBY_PLATFORM) %>sqlite3
  database: db/test.sqlite3
  timeout: 5000

# Warning: The database defined as 'cucumber' will be erased and
# re-generated from your development database when you run 'rake'.
# Do not set this db to the same as development or production.
cucumber:
  adapter: <%= "jdbc" if defined?(JRUBY_PLATFORM) %>sqlite3
  database: db/cucumber.sqlite3
  timeout: 5000

production:
  adapter: <%= "jdbc" if defined?(JRUBY_PLATFORM) %>sqlite3
  database: db/production.sqlite3
  timeout: 5000
Then open the Gemfile, find the line with the RMagick dependency and comment it out. This is needed because current versions of RMagick do not compile on the Windows platform. This gem is optional.
# Optional gem for exporting the gantt to a PNG file, not supported with jruby
platforms :mri, :mingw do
  group :rmagick do
    # RMagick 2 supports ruby 1.9
    # RMagick 1 would be fine for ruby 1.8 but Bundler does not support
    # different requirements for the same gem on different platforms
    # gem "rmagick", ">= 2.0.0"
  end
end
Because the first Redmine start may take longer than the default 30-second request timeout, open the web.config file and add a WORKER_REQUEST_TIMEOUT environment variable as shown below:
<heliconZoo>
  <clear />
  <application name="ruby.project" >
    <environmentVariables>
      <add name="RAILS_RELATIVE_URL_ROOT" value="%APPL_VIRTUAL_PATH%" />

      <!-- Use this APP_WORKER with HTTP Ruby engine and Thin. Thin needs to be installed. -->
      <!-- <add name="APP_WORKER" value="GEM_HOME\bin\thin start" /> -->
      <!-- <add name="APP_WORKER" value="%APPL_PHYSICAL_SHORT_PATH%\app.rb" /> -->

      <!-- The deploy file includes the most common commands required to prepare
           the application before launch (bundle install, migrations etc.)
           It is also possible to specify here any script which eventually
           will be run by rubyw.exe. -->
      <add name="DEPLOY_FILE" value="deploy.rb" />

      <!-- By default we run Rails in production mode -->
      <add name="RACK_ENV" value="production" />

      <!-- Web console location -->
      <!-- security rules for console are placed in /console/web.config -->
      <add name="CONSOLE_URL" value="console" />

      <add name="WORKER_REQUEST_TIMEOUT" value="200" />
    </environmentVariables>
  </application>
</heliconZoo>
Alternatively, you can use the Windows Start –> Programs –> Helicon –> Zoo –> Helicon Zoo Manager application to edit environment variables instead of editing web.config manually:
Rename the deploy_sample.rb file from the Zoo Ruby project template to deploy.rb:
Then request any page on the site – this will initiate the deployment process.
After the deployment is completed you should see Redmine’s home page. Then proceed with the Admin section as in the previous chapter.
Sometimes you may need to install Redmine on a machine that is not connected to the Internet. Here are instructions on how to prepare an offline installation package that you can use to install as many copies of Redmine as you need.
First you will need another machine that is connected to the Internet to prepare Redmine on it. Follow the instructions above and install Redmine using Helicon Zoo. Make sure everything works as you need and that all modules and components are installed into the web site’s GEM_HOME folder. Helicon Zoo applications are self-contained – you can move an application from one machine to another simply by copying the web site folder, as long as the required Zoo Hosting Package (in our case the Ruby Hosting Package) is installed on the target machine. So we need to prepare an offline package to install the Ruby Hosting Package. Fortunately, Web Platform Installer provides a command line tool to do this.
On the Internet-enabled machine, open a command line interface (plain command prompt or PowerShell) and navigate to the Web Platform Installer folder; this is usually C:\Program Files\Microsoft\Web Platform Installer. Then run the following command:
WebpiCmd.exe /offline /Products:RubyHostingPackage /Path:C:\ruby-offline /Feeds:http://www.helicontech.com/zoo/feed.xml
With the /offline key, WebpiCmd.exe downloads all possible dependencies for the selected product and saves the downloaded files, plus a special feed section required for offline installation, into the C:\ruby-offline folder. After generation is completed, simply copy the resulting folder to the Internet-restricted machine. You don’t even need to install Web Platform Installer on that machine, because the generated folder already contains WebpiCmd.exe in its bin folder.
Now open a command line on the Internet-restricted machine and run this command:
WebpiCmd.exe /install /Products:RubyHostingPackage /XML:C:\ruby-offline\feeds\latest\webproductlist.xml /Feeds:C:\ruby-offline\feeds\latest\supplementalfeeds\zooproducts4.xml
This will install the Ruby Hosting Package and everything required to run Ruby applications on this server. After installation is completed, simply create an empty IIS web site, copy the content of the Zoo Redmine web site from the Internet-enabled machine and run the site. Please don’t try to make offline copies of the Redmine application directly from the feed, because these packages require Internet access to run ‘gem install’ commands and other deployment tasks. You need to copy a working, completely tuned Redmine web site instead of trying to install fresh Redmine on the Internet-restricted machine.
In this article we are going to address basic questions that might occur to novice web programmers and those who are about to dive into Node.js, namely:
And the performance tests at the end of the article will try to answer a reasonable question: “Why would I need to learn Node.js?”
So, let’s start…
Node.js is an event-oriented JavaScript-based framework for development of web applications. The core concept is that nothing blocks during code execution – there are no operations waiting for data transfer, data input, connection establishment or anything else. Everything is based on events, which fire at the moments synchronous operations would otherwise wait for. This sometimes leads to a dramatic (dozens of times) performance boost in comparison with old synchronous systems. With the release of version 0.6.0 in November 2011, the Node.js build for Windows was announced stable.
To begin, download and install Web Platform Installer, run it, click Options and put the Helicon Zoo feed link http://www.helicontech.com/zoo/feed/ into the “Custom feeds” field:
This adds Zoo tab in Web Platform Installer:
Under Zoo -> Engines there’s a list of all available web engines, including Node.js. However, we recommend opting for the Node.js Hosting Package, which incorporates not only Node.js itself but also several highly useful modules. So, go to Zoo -> Packages -> Node.js Hosting Package and click Add, then Install.
To see all currently supported web frameworks and applications, visit the Helicon Zoo Gallery. After you agree to the license agreements, it starts downloading and installing IIS (if not there yet), Helicon Zoo Module and node.exe for Windows.
An important system component is Node Package Manager (npm), used for installation of additional modules. Starting from Node.js version 0.6.5, Node Package Manager is declared stable and is now included in the Node.js Hosting Package.
Now that Node.js is installed, a reasonable way to start building applications for it is to use WebMatrix templates. These templates simplify the creation of blank draft apps which can be used for further development.
To install them follow: Zoo -> Packages -> WebMatrix Templates
If you don’t have WebMatrix – not to worry – it will be downloaded and installed automatically together with the templates. After the installation, run WebMatrix and choose «Site from Template»:
As you can see, Node.js is not the only framework benefiting from WebMatrix templates.
If you follow the URL of the newly created Node.js Site or press Run, you get the familiar «Hello, World!» page.
By default the new site includes the express framework for easy web-app creation. The framework and its dependencies reside in the node_modules folder under the site root, which is convenient for deployment of the app on a remote server.
The “public” folder is used to store static files. Any file put into this folder will be served directly by IIS as static content, without invoking Node.js. This is especially important to avoid accidental execution of client-side *.js files on the server.
The web.config file contains URL Rewrite rules for static content. Initially every request is checked against the content of the public folder (i.e. whether it’s static). This is beneficial for web apps which mix static and dynamic resources in one folder (often the root). If your app is not doing so, you can delete the Microsoft URL Rewrite rules from web.config and refer to static files by explicitly specifying the public folder.
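This static-first check amounts to a simple file-existence test. A minimal Python sketch of the idea (the function and folder names here are illustrative, not part of Zoo or IIS):

```python
import os
import tempfile

def route(url_path, public_dir):
    """Serve the request as static when a matching file exists in the
    public folder; otherwise pass it on to the dynamic back-end."""
    candidate = os.path.join(public_dir, url_path.lstrip("/"))
    if os.path.isfile(candidate):
        return "static"
    return "dynamic"

# Demonstration with a throwaway 'public' folder
public = tempfile.mkdtemp()
open(os.path.join(public, "logo.png"), "w").close()

print(route("/logo.png", public))  # static: the file exists, IIS serves it
print(route("/chat", public))      # dynamic: no such file, Node.js handles it
```
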
Additionally, web.config includes configuration directives required for launch of Node.js and Helicon Zoo Module on that site.
One of the advantages of Node.js is that JavaScript is a popular language widely used in web development. It means you won’t have problems choosing the editor. Free WebMatrix editor is ok to start with.
To illustrate the capabilities of asynchronous web frameworks, most authors write a chat. Accordingly, the best-known demo app for Node.js is a chat, http://chat.nodejs.org/, whose source code is available for examination.
We’ve also decided to make a chat – a very primitive one – with no users, no sessions, no scrollback and no message editing. It can only transmit asynchronous messages, to show how long-polling works.
We’ll make use of previously-created Node.js Site. We need to edit server.js and index.html.
Here’s the source code for server.js:
var express = require('express');

var callbacks = [];

// Sends messages to clients
function appendMessage(message) {
    var resp = {messages: [message]};
    while (callbacks.length > 0) {
        callbacks.shift()(resp);
    }
}

// Creation of express server
var app = module.exports = express.createServer();
app.use(express.bodyParser());

// Simply respond with index.html
app.get('/', function(req, res){
    res.sendfile('index.html');
});

// Process messages from client
app.post('/send', function(req, res){
    var message = {
        nickname: req.param('nickname', 'Anonymous'),
        text: req.param('text', '')
    };
    appendMessage(message);
    res.json({status: 'ok'});
});

// Wait for new messages
app.get('/recv', function(req, res){
    callbacks.push(function(message){
        res.json(message);
    });
});

// Listen to the port
app.listen(process.env.PORT);
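The heart of server.js is the callback queue: every client polling /recv leaves a callback behind, and each message posted to /send drains the whole queue at once. Stripped of HTTP, the same pattern fits in a few lines of Python (the names are ours, for illustration only):

```python
callbacks = []   # clients currently waiting on /recv
received = []    # what our mock clients got back

def recv(callback):
    """A client 'waits' by leaving a callback in the queue (GET /recv)."""
    callbacks.append(callback)

def append_message(message):
    """A new message (POST /send) is delivered to every waiting client."""
    resp = {"messages": [message]}
    while callbacks:
        callbacks.pop(0)(resp)

# Two clients wait, then a single message arrives:
recv(received.append)
recv(received.append)
append_message({"nickname": "Anonymous", "text": "hi"})
print(len(received))  # 2, both waiting clients were answered at once
```
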
And here is index.html:
<html>
<head>
    <title>Node.js Zoo Chat</title>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.6/jquery.min.js" type="text/javascript"></script>
    <script type="text/javascript">
        // Initialization after page load
        $(document).ready(function(){
            $('form#send').submit(onSend);
            longPoll();
            $('#nickname').focus();
        });

        // Send message on pressing Submit
        function onSend(eventData){
            eventData.preventDefault();
            var msgArr = $(this).serializeArray();
            var message = {
                nickname : msgArr[0].value,
                text : msgArr[1].value
            };
            $.post('/send', message, function (data){
                $('#text').val('').focus();
            },
            'json');
        }

        // Called when new message is available
        function longPoll(data){
            if (data && data.messages) {
                for (var i = 0; i < data.messages.length; i++) {
                    var message = data.messages[i];
                    $('<p><b>'+message.nickname+':</b><span>'+message.text+'</span></p>').hide().prependTo('#messages').slideDown();
                }
            }
            // message processed, wait for new messages
            $.ajax({
                cache: false,
                type: "GET",
                url: "/recv",
                success: function(data){
                    longPoll(data);
                }
            });
        }
    </script>
</head>
<body>
    <h1>Node.js Zoo Chat</h1>
    <form action="/send" method="post" id="send">
        <label for="nickname">Nickname:</label> <input name="nickname" size="10" id="nickname" />
        <label for="text">Message:</label> <input name="text" size="40" id="text" />
        <input type="submit">
    </form>
    <div id="messages"></div>
</body>
</html>
To apply changes press Restart and then Run:
Now we can make sure the chat works by running it in two browsers:
The flexibility and usability of a web framework is largely defined by the availability of extra modules and applicability of third-party technologies. At present Node Package Manager is declared stable for Windows, so you can safely use it for module installations. NPM is included in the Node.js Hosting Package, and we plan to include it in the Node.js engine installation itself very soon, as soon as the new Node.js MSI installer becomes usable in the Helicon Zoo repository. There’s one trick to remember – by default npm installs modules into the node_modules folder under the folder it was invoked from. This is good for further application deployment on a remote server, as all required modules are included in the application folder itself. Thus, to install a module for a site, go to the site root and execute:
C:\>cd "C:\My Web Sites\Node.js Site"
C:\My Web Sites\Node.js Site>C:\node\npm.cmd install mongodb
npm WARN [email protected] package.json: bugs['web'] should probably be bugs['url']

> [email protected] install C:\My Web Sites\Node.js\node_modules\mongodb
> node install.js

================================================================================
=                                                                              =
=  To install with C++ bson parser do <npm install mongodb --mongodb:native>   =
=                                                                              =
================================================================================

[email protected] ./node_modules/mongodb
Another thing worth mentioning is that not all existing modules work on Windows. E.g. the excellent node-sync library won’t work on Windows. This library helps avoid the monstrous callback paradigm without losing Node.js’s asynchronous nature, but it relies on node-fibers, which does not work on Windows. Hopefully, fibers will eventually be implemented directly in Node.js.
Nevertheless, most modules are fully operational on Windows.
If you start working on a more or less complex Node.js project, sooner or later you’ll come to the conclusion that JavaScript is not that friendly. Tons of braces and loads of unnecessary constructions don’t add to code readability and make code management more complicated. Luckily, you are not the first one to notice this, so the problem is already solved: there are many derivative languages based on JavaScript or extending it. Here’s a short list FYI: http://altjs.org/
We take CoffeeScript as the trendiest one for now. Code written in CoffeeScript is simple and easy to read. It is compiled into plain JavaScript and executed. Moreover, JavaScript code may be converted into CoffeeScript. For example, the server.js script from our chat becomes this in CoffeeScript:
express = require("express")

callbacks = []

# Sends messages to clients
appendMessage = (message) ->
  resp = messages: [ message ]
  callbacks.shift() resp while callbacks.length > 0

# Creation of express server
app = module.exports = express.createServer()
app.use express.bodyParser()

# Simply respond with index.html
app.get "/", (req, res) ->
  res.sendfile "index.html"

# Process messages from client
app.post "/send", (req, res) ->
  message =
    nickname: req.param("nickname", "Anonymous")
    text: req.param("text", "")
  appendMessage message
  res.json status: "ok"

# Wait for new messages
app.get "/recv", (req, res) ->
  callbacks.push (message) ->
    res.json message

# Listen to the port
app.listen process.env.PORT
Learn more about CoffeeScript: http://jashkenas.github.com/coffee-script/
To install CoffeeScript run: C:\node\npm.cmd install coffee-script
There’s a good tool for debugging Node.js apps – node-inspector. It is already included in the node_modules folder of the Node.js site template. node-inspector works as follows:
In the root folder of a template-based Node.js site there’s a start_debug.cmd file which starts debugging for the current app and opens the debugging pages in the browser.
This is how debugger looks in the browser:
Now we have an app and want to put it online. What we need is a server, and putting together a Windows server and Node.js is now easier than ever. We only need to repeat the steps from the beginning of the article (which we used to set up the development environment): install Microsoft Web Platform Installer, add the Helicon Zoo feed to it and install the Node.js Hosting Package from the Zoo repository. Done – the server is ready to run our app. Supported server platforms include Windows 2008 and 2008 R2, x86 and x64.
Next, create a blank web site on the server using IIS or a hosting panel (if we are creating our own hosting) and copy the app into the site with FTP or WebDeploy. In the case of WebDeploy, all necessary permissions for the folders will be assigned automatically. One can also use Git or any other version control system, but that goes beyond the scope of this article.
Helicon Zoo Module was initially developed with hosting solutions in mind, so all applications within Zoo are isolated and do not interfere with each other. With default settings the module operates automatically, keeping one worker process when the load is low and spawning new workers (up to the number of processor cores) to ensure maximum performance when the load goes up.
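The scaling rule described above (one worker when idle, more under load, capped at the number of cores) can be sketched as a tiny function. This is our illustration of the behaviour, not Zoo's actual algorithm:

```python
import os

def workers_needed(pending_requests, max_workers=None):
    """Keep one worker when idle; add workers as the backlog grows,
    never exceeding the number of processor cores."""
    if max_workers is None:
        max_workers = os.cpu_count() or 1
    return max(1, min(max_workers, pending_requests))

print(workers_needed(0, max_workers=4))    # 1
print(workers_needed(2, max_workers=4))    # 2
print(workers_needed(100, max_workers=4))  # 4
```
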
Helicon Zoo adopts the concept of engines and applications. Engines define what to run and how, using which protocol and port, min and max number of workers allowed and other general settings, which are specified globally in applicationHost.config. Then under the site you can create an application relying on a particular engine and pass all parameters required for its flawless operation. This helps isolate hosting administrator work from clients and clients from each other.
Testing server characteristics: Core 2 Quad 2.4 GHz, 8 GB RAM, 1 Gb LAN. For load generation we used a more powerful machine and the Apache Benchmark command «ab.exe -n 100000 -c 100 –k». To test Apache and Nginx we took Ubuntu 11.04 Server x64; the IIS 7 tests ran on Windows Server 2008 R2. Nothing virtual, 100% hardware.
We conducted three tests. In the first one Node.js was supposed to output the current time in high resolution; time was chosen to be sure responses didn’t come from a cache. The second test involved reading from a MySQL database, the third one writing to the DB.
Here are the results (values on the graph mean requests per second):
Impressive, isn’t it? Time to explain what these tests measure. It’s not entirely correct to call them performance tests, as we do not compare different processors. While a processor may have performance, a web server is measured the other way around: how much processor time it takes to process one request.
The first test measures the raw expense of request processing for each web server and its ability to use processor resources; no set of technologies can respond faster than this on this processor. Nginx on Windows was far behind in this test because on this system Nginx opens a new connection to the back-end for each request, while Apache on Windows surprised us with connection pooling and true threads.
The second and third tests show how web server expenses grow with the “weight” of the request. However, they are influenced by a number of other factors, such as the performance of the file system, the DB drivers and the DB itself. Out of curiosity we also tested the combination Windows + Zoo + MongoDB to see the difference with MySQL. It gave 6793 rps for reading and 2906 rps for writing. The writing speed is truly amazing.
Another interesting fact is that the software and hardware used in these tests are exactly the same as in the Django tests in this article, so the results may be compared. Undoubtedly the Node.js scripts are much more lightweight (we didn’t use templates, ORM etc.), but it is still worth thinking about.
Responding to readers’ requests we are posting detailed ab graphs. We re-ran the first test with simple time output, because it most clearly depicts web server operation. The config files and js scripts being tested may be downloaded from here. There are only includes; everything else is left at defaults. The horizontal axis shows requests, the vertical one shows response time in milliseconds.
Windows, IIS7 + Zoo, “time output”:
Ubuntu, Apache, “time output”:
Ubuntu, Nginx, “time output”:
I believe that Node.js is a rather promising technology. It boasts great performance and flexibility. What is especially pleasing is that it’s equally good on both Unix and Windows and uses the relevant technological solutions on each system, which the tests vividly prove.
Support for Erlang and Java in Helicon Zoo is on the way. It will be interesting to compare the performance of these technologies as well. For now, Node.js is the unquestionable leader in performance among the supported frameworks.
Web development implies the use of two relatively independent environments – development and production. Helicon Zoo may be used in production as well as on the developer’s machine, or in both places. In either case the sequence of actions might be:
To start, download Web Platform Installer from the Microsoft website (http://www.microsoft.com/web/downloads/platform.aspx) and install it. WebPI already includes a wide range of frameworks and applications for IIS, like PHP, ASP.NET, WordPress, Drupal and phpBB. To enable Helicon Zoo, add a new feed to WebPI:
Run WebPI and click Options. In Custom Feeds box put http://www.helicontech.com/zoo/feed.xml and click Add feed:
Please note the web server choice – IIS Express or IIS. The main difference is that IIS Express runs as an interactive user application, which usually means administrative permissions if you are logged in as Administrator. This simplifies the development process and decreases the number of issues you may encounter with insufficient NTFS permissions to run the application. With IIS, web applications are executed as a restricted user and may require additional permissions tuning, but you get an environment that is closer to the one that will run the application in production. In this article we will be using IIS Express with WebMatrix as a development environment and IIS for production.
After adding the custom feed, a new Zoo tab will appear with Applications, Templates, Packages, Modules and Engines sections in it.
The best thing about Helicon Zoo and Web Platform Installer is that creating a new application and installing the application environment is done in one step, because WebPI will check and install all needed dependencies automatically. Please go to Zoo –> Templates, choose Python Project and install it.
Depending on the server chosen (IIS or IIS Express) and your system configuration, many components may be downloaded and installed the first time.
Warning: if you have already manually configured a Python environment, please use the packages that come with Helicon Zoo through the feed and Web Platform Installer instead. Even though it is in theory possible to use your custom Python installation with Helicon Zoo, we highly recommend using the packages from the Zoo feed. If you ignore this recommendation, troubleshooting your installation may be complicated.
If you have chosen IIS Express as the web server, WebMatrix will open automatically after installation, and after a short deployment process you should see the project’s index page with further instructions. With IIS installations an additional step is required, where you choose port and host name bindings, a folder on disk for the web site, etc. IIS Express web sites are created in \Documents\My Web Sites\. During the deployment process a special script, which will be explained later, creates a new Python virtualenv inside the web site folder. This virtualenv should be used for all further installations of modules and components, and Helicon Zoo will start the application using this virtualenv instead of the global Python settings.
The initial project content is rather simple:
The empty console folder is a simple placeholder in case you wish to configure authentication for web console access later; the static folder should be used to store static files (including the Zoo welcome page); the venv folder contains the Python virtualenv; the deploy_done.rb file will be explained later; the requirements.txt file contains the project dependencies; and the web.config file is essential to configure the Zoo application in this folder. The virtualenv configured here is a normal Python virtualenv, with the only difference that Zoo will not call the python.exe file from this folder by default. With typical settings the Zoo Python 2.7 engine calls python.exe from the Python installation folder (i.e. C:\Python27), configuring all other variables to point to the virtualenv inside the web site folder.
The instruction on the Welcome page asks us to click a link and start the web console, but first we will learn one more tool – Helicon Zoo Manager. Go to Start menu, Helicon –> Zoo –> Helicon Zoo Manager (either for IIS or IIS Express, depending on the server type you are using). This manager lists all your current web sites; here you can modify the configuration of Helicon Zoo Module, enable or disable engines, set environment variables, start the Web Console or an IDE, etc. By default Helicon Zoo runs a Python application using the Python 2.7 WSGI engine, and you can change this using Helicon Zoo Manager. So, let’s click the Start web console button to launch the console for this application:
Then type the following command to download and install Django into the virtualenv folder:
pip install django
You may ask: why use this web console if I can just run cmd.exe or any other IDE to run commands? The answer is that the Helicon Zoo web console is designed to run commands in the isolated environment of your application. All commands are applied to the application you are working with, using its local folders and environment variables, and are executed by the same interpreter and the same IIS application pool user that runs the application itself. This keeps applications portable: all modules and components are installed into the application folder, and the execution environment is easily replicated by installing Helicon Zoo Hosting Packages on other machines.

If, on the other hand, you launch the Windows console from the Start menu, you may have a number of environments and interpreters installed on your machine, like several different versions of Python. With the Windows console you can’t tell for sure which interpreter a command will call, where it is located, where it will store its settings, etc. IDEs and command line tools will usually install modules globally into the system, so your application will lose portability, and there can be conflicts between different versions of engines or modules installed in the system. This is why it is always recommended to install web application engines like Python or Ruby distributions using the Helicon Zoo repository, for example by installing Hosting Packages, instead of downloading and installing engines manually. Using the Helicon Zoo web console, or launching an IDE from Helicon Zoo Manager, is likewise essential if you want to avoid version conflicts and retain application portability.
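A quick way to verify which interpreter and environment a given console actually resolves to is to ask Python itself; the same three lines work in the Zoo web console, in cmd.exe, or inside an IDE:

```python
import os
import sys

print(sys.executable)  # full path of the interpreter that is actually running
print(sys.prefix)      # base of the active environment (the venv folder when one is active)
print(os.environ.get("VIRTUAL_ENV", "(not set)"))  # set by activated virtualenvs
```
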
The web console is executed by Zoo Module as an HTTP application in your browser. Anonymous remote requests to the console are prohibited by the Zoo engine for security reasons, so if you wish to access the console on a remote server you will have to enable one of the authentication methods for the console folder (or whatever location you have configured as the console). Alternatively, you can use the Helicon Zoo snap-in for Internet Information Services Manager, which Helicon Zoo Manager installs and which also works in remote mode, when IIS Manager is connected to a remote server.
A one-time hash code is used to authenticate the console session and is invalidated every time you close the console window. The ability to start the web console can be enabled or disabled globally and for individual applications using Helicon Zoo Manager, which is useful for hosters. Please read more about the web console here.
After the Django installation is finished, type the following command in the console to create an empty Django project named ‘project’:
django-admin.py startproject project
Then, as per the instructions, add the following environment variable using Helicon Zoo Manager: DJANGO_SETTINGS_MODULE=project.settings
Now, if you refresh the web site default page you should see the Django project welcome page:
Another useful feature of Helicon Zoo Manager is the ability to start an IDE for the application environment. This is not just a shortcut to your favorite IDE: before launching it, Zoo Manager configures the environment according to the environment variables of the selected application. Most current IDEs can read these variables to set locations correctly – the virtualenv folder, working directories, a Path variable with the correct location of the Python interpreter of the required version, etc. Open Helicon Zoo Manager and click Start IDE. The first time you do this for an application, a small Select IDE dialog will appear. By default it opens the Windows command line (cmd.exe), which is a convenient replacement for the web console we used in the previous chapter when you develop locally. This command line interface is launched with all paths configured for your application, so a ‘pip install’ command will install modules into the application folder, the same as with the web console. The difference is that cmd.exe is executed as the interactively logged-on user, while the web console is executed as the IIS application pool user, which may differ significantly in permissions. So for development purposes on a local machine, the Start IDE command is even more convenient than the web console.
Instead of using the ascetic command line you can configure your favorite IDE to start with this command. The environment variables will be configured before the IDE is launched, so the IDE will know the correct locations – the virtualenv folder, the Python interpreter, etc.
Below is a list of popular Python IDEs that run on Windows:
Besides modules written in Python, there are modules that need to be compiled during installation. These are so-called native modules. Normally the compilation process is automatic and requires only the presence of a C++ compiler in the system. No write access to system folders is required, as the compiler saves all output files into the module installation directory. The tricky part is that the version of the C++ compiler should be the same as the version used to build the Python distribution itself. The Python 2.7 package currently provided with Helicon Zoo has been built with Microsoft Visual Studio 2008 (v. 9.0), so to support native module installations you only need to install this version of Visual Studio. There is a freeware edition available from Microsoft, which you can download here: Visual Studio 2008 Express.
After installation you need to restart Windows so that Python starts using this compiler to build native modules. If you have several versions of the C++ compiler installed, it is sometimes necessary to specify the exact version to be used by Python. For this purpose you can set the following environment variable; simply add it to the Python engines using Helicon Zoo Manager:
VS90COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\Tools\
Now let’s create a simple Django application as a proof of concept. For convenience on a development machine, I suggest setting up a file-change watch mask. Since the Python engine does not reload modified files automatically, this will restart Python every time any *.py file in the web site folder is modified. Please add the following environment variable to the web application: WATCH_FILE_CHANGES_MASK=*.py
We will follow Tutorial 1 from the Django documentation. We’ll skip the chapters explaining Python server configuration, as we already have it running, and go straight to the Database setup chapter. We are going to use SQLite3 as a test database. So, open the project\project\settings.py file in the web site folder and modify the DATABASES section as follows:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',  # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
        'NAME': 'DEV_DB.sqlite3',  # Or path to database file if using sqlite3.
        # The following settings are not used with sqlite3:
        'USER': '',
        'PASSWORD': '',
        'HOST': '',  # Empty for localhost through domain sockets or '127.0.0.1' for localhost through TCP.
        'PORT': '',  # Set to empty string for default.
    }
}
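If you want to sanity-check that SQLite itself works before running Django, a quick standalone check might look like this (the in-memory database stands in for the DEV_DB.sqlite3 file):

```python
import sqlite3

# ":memory:" gives a throwaway database; "DEV_DB.sqlite3" would create
# the same file Django's settings point at.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE smoke (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO smoke (id) VALUES (1)")
rows = conn.execute("SELECT id FROM smoke").fetchall()
conn.close()
print(rows)
```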
Then start an IDE or cmd.exe using Helicon Zoo Manager; we will need a command-line interface. First move to the ‘project’ folder:
cd project
Run the syncdb command to create the database structures:
python manage.py syncdb
Then create the ‘polls’ application inside the project:
python manage.py startapp polls
Modify project\polls\models.py as follows:
from django.db import models

class Poll(models.Model):
    question = models.CharField(max_length=200)
    pub_date = models.DateTimeField('date published')

class Choice(models.Model):
    poll = models.ForeignKey(Poll)
    choice_text = models.CharField(max_length=200)
    votes = models.IntegerField(default=0)
Edit project\project\settings.py again and add ‘polls’ to INSTALLED_APPS section:
INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'polls',
)
And run syncdb again:
python manage.py syncdb
Edit polls\views.py:
from django.http import HttpResponse

def index(request):
    return HttpResponse("Hello, world. You're at the poll index.")
Add a URL route by creating the file polls\urls.py with the following content:
from django.conf.urls import patterns, url
from polls import views

urlpatterns = patterns('',
    url(r'^$', views.index, name='index'),
)
Now let’s connect this route to the main urls.py by editing project\urls.py as follows:
from django.conf.urls import patterns, include, url

# Uncomment the next two lines to enable the admin:
# from django.contrib import admin
# admin.autodiscover()

urlpatterns = patterns('',
    # Examples:
    # url(r'^$', 'project.views.home', name='home'),
    # url(r'^project/', include('project.foo.urls')),
    # Uncomment the admin/doc line below to enable admin documentation:
    # url(r'^admin/doc/', include('django.contrib.admindocs.urls')),
    # Uncomment the next line to enable the admin:
    # url(r'^admin/', include(admin.site.urls)),
    url(r'^/?', include('polls.urls')),
)
Now open the application’s home page in a browser. You should see the “Hello world” response from our application:
We will not guide you through the rest of the Django tutorial, as the purpose of this article was to prove the concept and show how to configure environments using Helicon Zoo. You can continue with the Django tutorial here.
So, we’ve created the application and now, logically, we want to publish it to the Internet. For that purpose we’ll set up a Windows server to run Python in production. We’ll have to repeat several steps from the beginning of the article that were used to install the development environment, but this time we’ll configure a production IIS server. If you are considering organizing Python hosting on Windows servers, you will need to complete exactly the same steps:
Now install Python Hosting Package from Zoo –> Packages.
This will install Python 2.7 (normally into the C:\Python27 folder), Virtualenv, Pip, Flup, Python Imaging Library, the MySQL driver for Python 2.7 and Twisted. All these components are needed for normal Python application operation; the rest of the components will be installed into the application folder itself. Installing the Python Hosting Package will also set correct NTFS permissions on the Python installation folders for the default IIS application pool settings. You can then update Python installation packages manually, leaving the installation folders as they are, because these folders are hardcoded in some WebPI components. But remember: the installation packages from the Zoo repository have been tested with Zoo, so you upgrade at your own risk. The Python Hosting Package may also install IIS and other Windows components as dependencies if they are not already present in the system. Finally, it will install Helicon Zoo Module, which is required to host these applications with IIS.
After these steps are completed the server is ready to host our application. At the moment the following server platforms are supported: Windows Server 2008, 2008 R2 and Windows Server 2012, all 32- and 64-bit versions where applicable. Older versions are unavailable because Helicon Zoo Module uses the native IIS 7 API, so everything prior to IIS 7 is unsupported, while all newer versions of IIS should be fine.
So, first we create an empty web site via IIS Manager or your hosting panel. Then simply upload the entire web site folder with your application to the server via FTP, Web Deploy or any other means. I would recommend configuring Web Deploy on the production server. This tool makes deployment of applications from WebMatrix or Visual Studio really easy; in addition, all application folders and files will automatically be given the proper permissions, as set by Helicon’s Python project template. In general you may need to enable write permissions on the entire web site folder for the user running the application, because a Python application will sometimes want to write things. You can also use Git or any other version control and deployment system, but that falls beyond the scope of this article, as does fine-tuning of write permissions.
Then, in general, you just navigate to the web site and it opens. The application is executed on Windows Server under IIS with the help of Helicon Zoo Module. This module was initially designed as a hosting solution, so all applications are isolated and do not interfere with each other. With its default options the module works fully automatically, creating one worker process per application when the load is low and increasing the number of workers up to the number of CPU cores on the server to provide maximum performance under heavy load. These settings can be changed per engine using Helicon Zoo Manager.
Helicon Zoo implements the concept of engines and applications. Engines define how to run an application: which interpreter to use, which protocol and port, the maximum number of workers allowed and other global settings. These are defined in applicationHost.config, so if a user has no write permission to applicationHost.config he can’t change engine settings. A Helicon Zoo application then ‘uses’ an engine by referencing it from the web.config file inside the web site folder. Each engine may have a list of parameters (environment variables) which users can set in the web.config files located in their application folders. This concept separates the hosting administrator’s duties from the clients’ (and clients from each other). You can learn more about Helicon Zoo Module configuration here.
Sometimes, when your application is not just an empty Hello World as in our example, simply copying files to the production environment is not enough. For example, your application may use an external database server, and you may need to execute database migration tasks in the production environment before the new code can run. For this purpose Helicon Zoo offers a very convenient tool called deploy scripts. Please notice the DEPLOY_FILE=”deploy.py” environment variable in the Helicon Python project template. This variable means that every time the Helicon Zoo engine finds a deploy.py file in the root of the web site it will do the following:
Here is how the “Application deployment in progress” message looks by default:
The Python project template already includes a deploy.py file. You may have noticed its execution on the first application start. During this first run the deploy script checks for the presence of a virtualenv in the folder and configures a new one if none is found. Additionally, this file contains common deployment instructions for database migrations, module installation using a requirements.txt file, etc. So if you exclude the /venv/ folder when uploading your application to the production server, it will be recreated during the next deployment and all requirements will be installed.
To initiate the deployment process you only need to rename the deploy_done.py file to deploy.py and push it to the server. You may want to make this the last change you upload, to make sure all other scripts and files have been updated before the deployment process starts. If the WATCH_FILE_CHANGES_MASK environment variable is not set, Python will not load updated code files unless restarted, so initiating deployment will do everything synchronously: migrate the database, install requirements and then load the new code into the engines. This is so-called “cold application maintenance”, which is sometimes needed because if other user requests (with either new or old code) run while database migrations and other deployment tasks are executed, data could be corrupted or users could get unpredictable responses. Helicon Zoo reduces application downtime to mere seconds and automates the whole deployment process, so it becomes easy to deploy large applications across an array of servers using simple techniques.
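The rename step itself is trivial; here is a sketch of how you might script it as the final action of an upload (the site_root parameter and the function name are ours for illustration, the file names follow the convention described above):

```python
import os

def trigger_deploy(site_root):
    """Rename deploy_done.py back to deploy.py so Helicon Zoo
    runs the deployment script on the next request."""
    src = os.path.join(site_root, "deploy_done.py")
    dst = os.path.join(site_root, "deploy.py")
    if os.path.exists(src):
        os.rename(src, dst)
        return True
    return False  # nothing to deploy
```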
Please read more about deployment scripts in Helicon Zoo documentation.
And now a performance test for dessert. The testing machine acting as server was a Core 2 Quad 2.4 GHz, 8 GB RAM, Gigabit LAN. To generate load we used a more powerful PC running Apache Benchmark. To measure Apache and Nginx performance, Ubuntu 11.04 Server x64 was used. The IIS 7 tests ran on Windows Server 2008 R2. No virtual machines, only bare hardware. As transport on Nginx we used the most advanced option, uwsgi, as well as wsgi and fast_cgi for comparison. On IIS 7 we also compared with PyISAPIe.
Two Django scripts were created as test pages. The first one outputs the current time in high resolution; this ensures pages are not taken from cache. The second one does the same but first saves the result into the database. Both use templates in order to exercise the real Django infrastructure; the DB used is MySQL. All settings were left at their defaults, as the task was to test the most common configurations. Here are the results (in requests per second):
No surprises here, as Python performance on Windows may be slower than on Ubuntu. Taking this into consideration, Helicon Zoo transport performance is really high. Uwsgi is ahead, probably due to its closer integration with Django.
The results for the second script are not that smooth. Why Nginx + fcgi + MySQL showed only 175 requests per second remains unknown. The MySQL-on-Windows score is also frustrating, although on shared hosting the problem might not be that critical. The point is that performance drops due to internal MySQL locks: the server was not even 20% loaded while generating these 104 requests per second. It’s reasonable to assume that by increasing the number of sites on the server, and consequently the number of DBs, the total server performance will be acceptable provided the databases do not interlock with each other.
Thus we decided to add MS SQL Express to the tests. The result was easy to explain, with Python and its database driver being the bottleneck, though in general the picture is quite promising. Unfortunately, PyISAPIe was unable to work with MS SQL Express and was excluded from these tests.
It is worth mentioning the ability of IIS 7 to handle a great number of connections. IIS 7 + Helicon Zoo easily held thousands of concurrent connections; we simply didn’t have the testing capacity to generate enough connections to trigger any problems. Ubuntu with default settings started throwing connection failures as the number of connections increased. Moreover, Apache appeared to be greedy for memory: during the test, as the number of connections went up, Apache swallowed about 3 GB in 20 seconds.
The steps to be accomplished are:
In the root of your site you have a downloads folder containing files for download (e.g., AudioCoder.msi, AudioDecoder.msi, AdditionalCodes.zip).
You have Helicon Ape installed on your Windows Server 2008 (IIS7).
You have SQL Server that will store the DB with downloads statistics.
Run Microsoft SQL Server Management Studio and connect to your SQL server.
Right click on ‘Databases’, select ‘Create Database’, enter database name (for example ‘DownloadsCounter’) and click ‘OK’.
Now we’ll create the table itself:
Expand DownloadsCounter in Object Explorer, right-click ‘Tables’ and select ‘New Table…’. Name the table ‘Downloads’ and add 4 fields: id (int), moment (datetime), filename (nvarchar(50)) and ipaddress (nvarchar(50)), as shown in the picture below. You can find the SQL script to create the table in the attached DownloadsCounterExample.zip.
Done with the table!
Start Helicon Ape Manager, select your site in the tree and click the ‘downloads’ folder. Now write the following config into the .htaccess on the right.
The config instructs mod_dbd to connect to the DB, captures data from the request (filename and client IP address) and writes them into the DB.
# Helicon Ape version 3.0.0.59
# Connection settings
DBDriver mssql
DBDParams "Data Source=db2003\MSSQLSERVER2008;Initial Catalog=DownloadsCounter;\
User ID=sa;Password=123123"
# Save Filename (for .msi & .zip files only)
SetEnvIfNoCase REQUEST_URI ^/downloads/(.+\.(?:msi|zip))$ FileNameENV=$1
SetEnvIfNoCase REMOTE_ADDR ^(.*)$ IpAddrENV=$1
# Sql query to save download event
DBDPrepareSQL "INSERT INTO DownloadsCounter.dbo.Downloads\
(moment, filename, ipaddress)\
VALUES (\
GETDATE(),\
'%{FileNameENV}e',\
'%{IpAddrENV}e'\
)\
" InsertDownload
# Execute sql query if request uri is .msi or .zip file
SetEnvIf request_uri \.(?:msi|zip)$ dbd_execute=InsertDownload
# limit access to statistics page to localhost only
<Files stat.aspx>
Order Deny,Allow
Deny from all
Allow from ::1 127.0.0.1 localhost
</Files>
Save your .htaccess and try to download something from the browser. If you now glance at the ‘Downloads’ table in SQL Server Management Studio, you’ll see records appearing in it:
The Helicon Ape module processes all requests coming to the /downloads/ folder. If a .zip or .msi file is requested, Ape memorizes the filename (FileNameENV) and the client IP (IpAddrENV) from which the file was requested, and inserts these data into the table by means of the SQL query.
The archive attached to the article includes stat.aspx, which does a very simple task: it shows records from the ‘Downloads’ table.
Of course, that’s not the sort of viewer you need, as it should be capable of showing statistics by days, months, products etc. But designing a real-life solution is beyond the scope of this article, so feel free to advance stat.aspx by yourself.
Below is the archive DownloadsCounterExample.zip containing all files from this article. To do some testing, just unzip it into the root of your site and change the database password from ‘123123’ to yours in both .htaccess and web.config.
As we already mentioned, Google Analytics and similar services are based on JavaScript working on the page. They know nothing about static content and other files downloaded from the server.
Another option (the other extreme) is server logs. They are not always accessible or enabled, and they require writing special parsers or analyzers. Moreover, server logs do not provide live (current-moment) info. Usually a log file is created daily, so analysis is only possible at the beginning of the next day.
The method explained above is a quick and effortless way to attach a downloads counter to your site using Helicon Ape mod_dbd. The example is easily extendable: e.g., you can add more fields to the table to save Referer, User-Agent, etc., or develop a customizable statistics viewer.
We’ve just given you the foundation; now it’s time for you to build the house.
Best regards,
Ruslan—Helicon Tech Team
Caching static content (pictures, CSS files, JavaScript files) on the client side (in the browser) means that, having received a static file once, the browser saves it in its cache and doesn’t request it from the server the next time the HTML document is requested; the file is taken from the cache instead. Both sides win: the client sends fewer requests, the web site works faster and the server processes fewer requests. For instance, an ordinary WordPress post page has over a dozen links to static files (CSS files, pictures, scripts). The time spent downloading these files exceeds the time spent downloading the post itself. With caching enabled, the static content is downloaded only once; when moving to the next page, the only thing downloaded is the page itself, and all static files are taken from the cache. To make the browser cache static content, the HTTP response must contain specific headers: Expires and Cache-Control. These headers are set by the mod_expires and mod_headers modules. To enable caching, create a .htaccess file with the following content inside the static folder:
ExpiresActive On
Header set Cache-Control public
ExpiresByType image/.+ "access 15 days"
ExpiresByType text/css "access 5 days"
ExpiresByType application/x-javascript "access 5 days"
ExpiresByType application/javascript "access 5 days"
If there’s no dedicated directory for static content and the files are spread across the folders of the web site, you can create the following .htaccess in the root of the site; it will cache all static content on the web site by file extension:
<Files ~ "\.(gif|png|jpg|css|js)$">
ExpiresActive On
Header set Cache-Control public
ExpiresByType image/.+ "access 15 days"
ExpiresByType text/css "access 5 days"
ExpiresByType application/x-javascript "access 5 days"
ExpiresByType application/javascript "access 5 days"
</Files>
This configuration makes the server send HTTP responses telling clients that pictures are to be cached for 15 days, and scripts and CSS files for 5 days.
To save some time on loading content, you can compress it. All modern browsers are able to receive compressed gzip traffic. Text files (HTML files, CSS files, scripts, JSON data) compress easily and allow you to save 20-90% of traffic. At the same time, music and video files can hardly be compressed, as they have already been compressed by special codecs. Here’s an example of enabling gzip compression; add the following line to .htaccess in the root of the web site:
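To see what “access 15 days” actually turns into on the wire, here is a sketch of the Expires timestamp computation. This mimics the header format only; it is not the module’s actual code, and the function name is ours:

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

def expires_header(days, now=None):
    """RFC 1123 date 'days' days after the access time."""
    now = now or datetime.now(timezone.utc)
    return format_datetime(now + timedelta(days=days), usegmt=True)

# For an access at 2024-01-01 00:00:00 GMT and "access 15 days":
print(expires_header(15, datetime(2024, 1, 1, tzinfo=timezone.utc)))
```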
SetEnvIf (mime text/.*) or (mime application/x-javascript) gzip=9
As you can see, this configuration is quite simple: it has all text documents (HTML, CSS files) and JavaScript files compressed before being sent to the client. It is worth noting that the server compresses responses only for browsers that support compression; the browser informs the server about its capabilities through the HTTP request headers.
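The saving is easy to demonstrate: repetitive markup such as generated HTML compresses dramatically, which is exactly why compressing already-encoded media is pointless. A quick sketch with Python’s gzip module (the sample markup is invented):

```python
import gzip

# Repetitive markup, typical of generated HTML
html = b"<html>" + b"<li><a href='/post'>post</a></li>" * 200 + b"</html>"
packed = gzip.compress(html, compresslevel=9)

# The compressed size is a small fraction of the original
print(len(html), len(packed))
```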
Often a large number of requests addressed to the database server hinders web site performance. For example, a blog’s main page shows recent entries, recent comments, a navigation menu, a category list and tags; those are several complicated database queries. If that information does not change often, or up-to-the-second relevance is not vital, the HTML responses should be cached without hesitation. You might choose to cache the blog’s main page once every 5-10 minutes, and that would be enough to improve the main page’s performance in the browser. In practice, the application developer must decide which pages need to be cached and for how long, and implement a caching mechanism “out of the box”. Unfortunately, that doesn’t happen most of the time. Luckily, mod_cache in Helicon Ape lets you enable caching on the server side simply and easily. mod_cache supports two types of cache: disk cache and memory cache. The first saves cached data on the drive, the second in memory. Memory caching is preferable; if your server doesn’t have enough RAM, use the disk cache. For example, to cache the site’s homepage, add the following lines to .htaccess in the root:
Header set Cache-Control public,max-age=600
SetEnvIf request_uri ^/$ cache-enable=mem
This configuration caches the response for the site’s homepage for 10 minutes (600 seconds); responses are cached in memory. Be careful! Caching must be enabled thoughtfully. For example, pages that require authentication mustn’t be cached, as they contain private data and must provide different information to different users. In any case, caching must take the application logic into account. We’ve reviewed three simple steps for increasing the speed of your web site. Besides the tangible speed boost, which you will notice at once, the acceleration should also improve your ranking in search engine results. You can see the performance graph of www.helicontech.com made with Google Webmaster Tools after a simple optimization. So equip your site with these tricks and enjoy the dual benefit!
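Conceptually, the memory cache behaves like a dictionary of responses with timestamps, where entries older than max-age are ignored. A minimal sketch of the idea (a toy model, not the module’s actual implementation; the class name is ours):

```python
import time

class MemCache:
    """Toy model of a max-age based in-memory response cache."""

    def __init__(self, max_age=600):
        self.max_age = max_age
        self._store = {}

    def put(self, url, response, now=None):
        # Remember the response together with the time it was stored
        self._store[url] = (time.time() if now is None else now, response)

    def get(self, url, now=None):
        now = time.time() if now is None else now
        hit = self._store.get(url)
        if hit and now - hit[0] < self.max_age:
            return hit[1]
        return None  # expired or missing: the page must be regenerated
```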
In the simplest case there are only two request processing contexts:
The server context is executed first; after that, if further processing is allowed (no redirect or proxy happened), the root folder config is processed (if present).
Picture 1. Server-wide configuration (httpd.conf)
Picture 2. Per-site configuration (.htaccess)
Picture 3. Processing order for the configs on Pictures 1 and 2
***
Now let’s make it more complicated: we’ll have rules in the root folder and in Directory1. The processing order then becomes:
DirectiveA
DirectiveB
Picture 4. Processing order in case of several .htaccess files
For a request to http://localhost/index.html only the first context is applied, while for http://localhost/Directory1/index.html (and other requests to deeper subfolders) the merged context 1+2 is executed. In our case it’s:
DirectiveA
DirectiveB
Thus, the child context complements and refines the parent one (but not the server one). This is true for nearly all Apache/Ape modules EXCEPT mod_rewrite. It’s one of a kind and behaves differently.
Historically, or for convenience, mod_rewrite contexts do not complement but COMPLETELY OVERRIDE each other. So, if we have two configs
RewriteRule a b
RewriteRule b c
the resulting config to be applied to the request will be
RewriteRule b c
and NOT
RewriteRule a b
RewriteRule b c
which may be unobvious for newbies.
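The difference can be modeled in a few lines. Here is a sketch contrasting the complementary merge most modules use with mod_rewrite’s override (directives are plain strings for illustration; the function is ours):

```python
def merge(parent, child, rewrite=False):
    """Most modules: the child context complements the parent.
    mod_rewrite: a non-empty child completely replaces the parent."""
    if rewrite:
        return child if child else parent
    return parent + child

print(merge(["DirectiveA"], ["DirectiveB"]))
print(merge(["RewriteRule a b"], ["RewriteRule b c"], rewrite=True))
```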
For experts! mod_rewrite has an option allowing you to change this behavior and inherit the parent rules:
- /.htaccess
RewriteRule a b
- /Directory1/.htaccess
# inherit parent rules
RewriteOptions inherit
RewriteRule b c
makes up the following merged config:
RewriteRule b c
# parent rules are appended to the end of the merged config!
RewriteRule a b
<Directory> section is equivalent in meaning to writing rules in the .htaccess located inside this directory. The only difference is that <Directory> lives in httpd.conf.
If both a <Directory> section and a .htaccess exist for the same directory, they are merged; if the directives inside them conflict, the .htaccess directives take precedence.
Picture 5. Processing order when there are both .htaccess and <Directory> for the same location
Let’s see how the configs are merged for the request to http://localhost/Directory1/Directory2/index.html if each directory has both <Directory> section and corresponding .htaccess file.
Picture 6. Processing order when there are several .htaccess files and several <Directory> sections which are applicable for the same request
httpd.conf
<Directory C:/inetpub/wwwroot/>
DirectiveDirectoryA
</Directory>
<Directory C:/inetpub/wwwroot/Directory1/>
DirectiveDirectoryB
</Directory>
<Directory C:/inetpub/wwwroot/Directory1/Directory2/>
DirectiveDirectoryC
</Directory>
/.htaccess
DirectiveA
/Directory1/.htaccess
DirectiveB
/Directory1/Directory2/.htaccess
DirectiveC
The following logic is applied to form the merged config:
The resulting sequence of directives will be:
DirectiveDirectoryA
DirectiveA
DirectiveDirectoryB
DirectiveB
DirectiveDirectoryC
DirectiveC
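The merge logic above can be sketched as a simple interleave, walking from the shallowest directory to the deepest and taking the <Directory> section before the .htaccess at each level (a toy model; the function name is ours):

```python
def merged_config(levels):
    """levels: list of (directory_section, htaccess) directive pairs,
    ordered from the root down to the deepest folder."""
    merged = []
    for directory_section, htaccess in levels:
        # <Directory> first, then the .htaccess that refines it
        merged += directory_section + htaccess
    return merged

order = merged_config([
    (["DirectiveDirectoryA"], ["DirectiveA"]),
    (["DirectiveDirectoryB"], ["DirectiveB"]),
    (["DirectiveDirectoryC"], ["DirectiveC"]),
])
print(order)
```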
Usually the order of directives is not so important, but not in the case of mod_rewrite; that’s why understanding the principles of config merging may dramatically reduce development and debugging time.
Note! <DirectoryMatch> sections are applied not to all parts of the request path (see above) but only to the deepest part, and all matches are collected. For example, if there are two sections:
<DirectoryMatch C:/inetpub/wwwroot/Directory1/Directory*/>
and
<DirectoryMatch C:/inetpub/wwwroot/Directory1/*/>
then both of them get into the merged config.
One should remember that everything written inside the server context is applied to all requests and all sites. Sometimes it may be necessary to limit the scope of a directive to one or several sites, and that’s when the <VirtualHost> section is used.
<VirtualHost> can reside in httpd.conf only. It is merged with the server config, i.e. complements it. If both a .htaccess and a <VirtualHost> section exist for a specific location, the latter has higher priority and can override server settings for the specific site (in our case localhost).
#httpd.conf
ServerDirective
<VirtualHost localhost>
VirtualHostDirectiveA
</VirtualHost>
Note! mod_rewrite offers another way to restrict scope for the rules to specific host – RewriteCond %{HTTP_HOST}.
The difference is that RewriteCond %{HTTP_HOST} must appear before each RewriteRule, while <VirtualHost localhost> groups all rules for localhost together and affects all of them. Compare:
RewriteCond %{HTTP_HOST} localhost
RewriteRule . index.php [L]
RewriteCond %{HTTP_HOST} localhost
RewriteRule about$ about.php [L]
and
<VirtualHost localhost>
RewriteRule . index.php [L]
RewriteRule about$ about.php [L]
</VirtualHost>
On the other hand, the limitation of <VirtualHost> is that it can’t be used in .htaccess.
Picture 7. Processing order when <VirtualHost> section is present in httpd.conf
Note! <VirtualHost> sections are NOT merged together: if there are several <VirtualHost>s matching the request, the one with the best match is applied. E.g.:
<VirtualHost localhost>
<VirtualHost localhost:80>
<VirtualHost *>
For a request to localhost:80/page.html the second one will be applied, whereas for localhost/page.html the first one will fire.
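Best-match selection can be sketched as preferring the most specific host pattern. This toy function (ours, for illustration) assumes only exact host:port, host-only and ‘*’ patterns:

```python
def best_vhost(request_host, vhosts):
    """Pick the most specific matching <VirtualHost> pattern."""
    if request_host in vhosts:           # exact host:port match
        return request_host
    host_only = request_host.split(":")[0]
    if host_only in vhosts:              # host without port
        return host_only
    return "*" if "*" in vhosts else None  # wildcard as last resort

patterns = ["localhost", "localhost:80", "*"]
print(best_vhost("localhost:80", patterns))
print(best_vhost("localhost", patterns))
```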
If a <Directory> section is specified inside <VirtualHost> (which is possible), the processing order is as follows: the <Directory> section of the main server config is taken into account first, then the <Directory> inside <VirtualHost>, and after all of those, .htaccess.
Thus, using a <Directory> section outside <VirtualHost> will lead to its rules being applied to all sites (in case they use this shared folder).
Picture 8. Processing order when there are <VirtualHost> and <Directory> sections as well as .htaccess
These two behave similarly to <DirectoryMatch> but are matched against file names, not the full path.
E.g., for http://localhost/Directory1/index.html#top they will find the corresponding file C:\inetpub\wwwroot\Directory1\index.html in the file system and merge all <FilesMatch> sections valid for this file name (e.g. <FilesMatch *.html> and <FilesMatch index.*> will be merged).
Note! <Files> and <FilesMatch> may reside in .htaccess as well!
They are applied to the corresponding virtual path, which for http://localhost/Directory1/index.html#top is /Directory1/index.html.
Let’s now put it all together. Here’s the final sequence of sections:
***
It seems every aspect of config processing has been covered. We understand that this article may look somewhat sophisticated, but we are sure there are enthusiasts who’ll find it helpful.
The web interface illustrates the current state of load balancers and their nodes.
The following info is shown for the balancer nodes:
Here’s how you can set this handler to enjoy all this stuff:
<Location /balancer-manager/>
SetHandler balancer-manager
Order allow,deny
Allow from 127.0.0.1 ::1 localhost
</Location>
Please note that the URL to which the handler is mapped must be secured from unauthorized access. For instance, access may be granted to the local machine only (see the example above), or basic/digest authorization may be enabled.
Feel free to try our web interface for the load balancer to facilitate control and get comprehensible statistics for any node and any balancer.
Best wishes,
Ruslan – Helicon Tech Team
as well as Ape-specific details:
With the new mod_developer, debugging mod_rewrite rules is faster and easier, as you can now see the hierarchy of subrequests invoked by RewriteRule.
To show you mod_developer in action we’ve created a special demo site: a WordPress blog running on Ape with mod_developer enabled for all visitors.
To try mod_developer live, follow the link: http://moddeveloper.helicontech.com/?ape_debug=secure-key-1231234
Note! Debugging security is ensured by a mod_developer
environment variable. This variable stores a unique secure key which only you should know. To start debugging, simply request http://yoursite.com/path/to/debug/?ape_debug=secure-key-XXXXXXX.
Debugging may also be enabled from Ape Manager (Options → Start Ape Debugger). In this case a secure key is generated automatically and pasted into your current configuration.
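If you prefer to generate the secure key yourself, any sufficiently random token will do. A sketch using Python’s secrets module (the “secure-key-” prefix simply follows the URL format shown above):

```python
import secrets

# 16 hex characters of randomness; adjust the length to taste
key = "secure-key-" + secrets.token_hex(8)
print(key)
```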
Please feel free to test our mod_developer and post your impressions in comments.
Best wishes,
Helicon Tech Team