Apparently your Ubuntu can blue-screen too! Here is how to fix the blue screen in Ubuntu

I did not know that one could get a blue screen in Ubuntu as well. If it has already happened to you, don't panic: this is how to fix the blue screen in Ubuntu. It most likely happened because of a problem while upgrading the distro.


$ sudo su
$ sudo apt-get install libgdk-pixbuf2.0-dev
$ cd /usr/lib/x86_64-linux-gnu/gdk-pixbuf-2.0/
$ find 2.10.0/loaders/ > ~/pixbuf-files
$ nano ~/pixbuf-files
# and delete 1st line 2.10.0/loaders/

$ cat ~/pixbuf-files | xargs -n1 gdk-pixbuf-query-loaders > 2.10.0/loaders.cache
$ reboot

ActiveAdmin semantic input partial and f.inputs together

In an ActiveAdmin form, when I do:

  form(:html => { :multipart => true })  do |f|
    f.inputs
  end

it shows all the fields nicely, and when it comes to a belongs_to field it renders it as a collection select, but I want to replace that collection with my own template for the belongs_to field.
Now, if I use a custom belongs_to input I can't use the power of f.inputs, because the field would be rendered twice. So what can we do?

Well, currently I am using this as a solution.
In my helper/active_admin_helper:

	def form_inputs_for(m)
	  columns = m.columns.map { |c| c.name }
	  columns = columns.select { |s| !(s.end_with?("_id") or s.end_with?("_at") or s == "id") }
	  columns.each do |a|
	    input a
	  end
	end

and at form:

ActiveAdmin.register ModelClass do
  require "helper/active_admin_helper"

  form(:html => { :multipart => true }) do |f|
    f.inputs do
      render "admin/product", f: f
      form_inputs_for(ModelClass)
    end
  end
end
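For reference, the admin/product partial that gets rendered above could be as simple as a single custom input. A minimal sketch, assuming the model belongs_to :product and that the partial lives at app/views/admin/_product.html.erb (both the association and the file name are illustrative, not from the original project):

<%# app/views/admin/_product.html.erb -- hypothetical custom belongs_to input %>
<%= f.input :product,
            :as => :select,
            :collection => Product.all.map { |p| [p.name, p.id] },
            :include_blank => false %>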

Let me know if you have a better solution; in fact, I am still looking for one…

RabbitMQ & Celery demo using an image processing app on Flask

Celery is a distributed system for processing messages on a task queue, with a focus on real-time processing and support for task scheduling. When we have to run an expensive function that keeps the user waiting for what feels like "forever", it is always better to use something like Celery. In this blog we will be writing a face detection web app using Flask, python-opencv and Celery.

Before I say anything else, let me share a Flask code snippet with you:

from time import sleep

@app.route("/")
def hello():
    sleep(10) # <---what would you see in this 10s?
    return "Hello World!"

Can you tell me what you would see in the first 10 seconds while we run this Flask app? Before getting the response, the user is kept waiting for 10 seconds. We don't love to wait 10 seconds; we are impatient and want everything instantly, and that is the expectation in modern computing. But life is cruel and we can't get everything instantly. We understand that, but our users DO NOT understand this simple truth. So what do we do? We try to sell them the feeling that we are responding instantly, or at least that the page is not taking forever to load. So we need to get out of that sleep block, and how we do that is what I am going to discuss in this blog, with a real-life image processing app in Flask.

Obviously in real life we don't write "sleep" to make our code slower; we write plenty of functions that are slow on their own. In this blog we will be writing an application that lets a user upload a picture and detects the faces in it. The expensive function here is face detection: it takes about 3-10s on my machine to detect the face of my favourite actress. Let me share my code:

#server.py

__author__ = 'sadaf2605'


import os
from flask import Flask, request, redirect, url_for
from werkzeug import secure_filename

import face_detect
from os.path import basename


UPLOAD_FOLDER = '/home/sadaf2605/flask_celery_upload_image/uploads'
ALLOWED_EXTENSIONS = set(['txt', 'pdf', 'png', 'jpg', 'jpeg', 'gif'])

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER


def allowed_file(filename):
    return '.' in filename and \
           filename.rsplit('.', 1)[1] in ALLOWED_EXTENSIONS

@app.route('/', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        import time
        start_time = time.time()
        file = request.files['file']

        if file and allowed_file(file.filename):
            filename = secure_filename(file.filename)

            base,ext=os.path.splitext(filename)


            file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
            face_detect.detect(os.path.join(app.config['UPLOAD_FOLDER'], filename),os.path.join(app.config['UPLOAD_FOLDER'], base+"-face"+ext))

            print "--- %s seconds ---" % str (time.time() - start_time)
            return redirect("/")
            return redirect(url_for('uploaded_file',
                                    filename="facedetect-"+filename))

    from os import listdir
    from os.path import isfile, join
    htmlpic=""
    for f in sorted(listdir(UPLOAD_FOLDER)):
        if isfile(join(UPLOAD_FOLDER,f)):
            print f
            htmlpic+="""
            
                
            
                """

    return '''
    <!doctype html>
    <title>Upload new File</title>
    <h1>Upload new File</h1>
    <form action="" method="post" enctype="multipart/form-data">
        <p><input type="file" name="file">
           <input type="submit" value="Upload">
    </form>
    ''' + htmlpic


from flask import send_from_directory

@app.route('/uploads/<filename>')
def uploaded_file(filename):
    return send_from_directory(app.config['UPLOAD_FOLDER'], filename)


from werkzeug import SharedDataMiddleware
app.add_url_rule('/uploads/<filename>', 'uploaded_file', build_only=True)
app.wsgi_app = SharedDataMiddleware(app.wsgi_app, {
    '/uploads': app.config['UPLOAD_FOLDER']
})


if __name__ == "__main__":
    app.debug = True
    app.run()

#face_detect.py

import numpy as np
import cv2


face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

def detect(src_img,dest_img):
    img = cv2.imread(src_img)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    faces = face_cascade.detectMultiScale(gray, 1.3, 1)
    for (x,y,w,h) in faces:
        cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),5)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = img[y:y+h, x:x+w]


    cv2.imwrite(dest_img, img)

You can test this app by running:

python server.py

But we don't want our user to wait 10s to see the next page, so we will use Celery and RabbitMQ to help us. First of all, let's install RabbitMQ and Celery.

To install RabbitMQ we will use aptitude, because it installs all the dependencies along the way. If you don't have aptitude installed:

sudo apt-get install aptitude

Now it's time to install the RabbitMQ server:

 sudo aptitude install rabbitmq-server

Now we will create a user and a virtual host (vhost) for RabbitMQ.

sudo rabbitmqctl add_user rabbit_user password
sudo rabbitmqctl add_vhost /app_rabbit

We will give our user permission to do everything on that vhost:

sudo rabbitmqctl set_permissions -p /app_rabbit rabbit_user ".*" ".*" ".*"

Now we need to restart the RabbitMQ server so that the changes take effect:

sudo /etc/init.d/rabbitmq-server stop
sudo /etc/init.d/rabbitmq-server start

Now we will install celery:

pip install celery

Now we need to configure Celery. Celery provides a few decorator functions like @task to achieve our goal, and RabbitMQ is the default broker for Celery. We also need to know that Celery communicates with the broker via a broker URL, on a different port. We want to enqueue our image processing task, so we can define it in face_detect.py, though arguably it would be better to put it in server.py since that is the entry point... but whatever, for now!

import numpy as np
import cv2

from celery import Celery

app= Celery(broker='amqp://rabbit_user:password@localhost:5672//app_rabbit' )


face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

@app.task
def detect(src_img,dest_img):
    img = cv2.imread(src_img)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    faces = face_cascade.detectMultiScale(gray, 1.3, 1)
    for (x,y,w,h) in faces:
        cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),5)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = img[y:y+h, x:x+w]


    cv2.imwrite(dest_img, img)

Now this alone won't change your life radically, because we are not yet calling the task through the decorator Celery gave us. To put the task in the queue we need to use the delay function that the decorator adds, so we call face_detect.detect.delay(src_img, dest_img). We also need to keep a Celery worker running, otherwise the call will only put the task in the queue and wait for a worker to pick it up. In the -A parameter of celery we mention the module where the decorated functions are located.
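In other words, the only change on the calling side is switching from the direct call to the .delay call (a quick sketch):

# synchronous: the request blocks for the full 3-10s while faces are detected
face_detect.detect(src_img, dest_img)

# asynchronous: returns immediately, a Celery worker does the processing
face_detect.detect.delay(src_img, dest_img)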

To run the Celery worker:

celery worker -A face_detect -l INFO

So now finally we can change our server.py

__author__ = 'sadaf2605'


import os
from flask import Flask, request, redirect, url_for
from werkzeug import secure_filename

import face_detect
from os.path import basename


UPLOAD_FOLDER = '/home/sadaf2605/flask_celery_upload_image/uploads'
ALLOWED_EXTENSIONS = set(['txt', 'pdf', 'png', 'jpg', 'jpeg', 'gif'])

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER


def allowed_file(filename):
    return '.' in filename and \
           filename.rsplit('.', 1)[1] in ALLOWED_EXTENSIONS

@app.route('/', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        import time
        start_time = time.time()
        file = request.files['file']

        if file and allowed_file(file.filename):
            filename = secure_filename(file.filename)

            base,ext=os.path.splitext(filename)


            file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
            face_detect.detect.delay(os.path.join(app.config['UPLOAD_FOLDER'], filename),os.path.join(app.config['UPLOAD_FOLDER'], base+"-face"+ext))

            print "--- %s seconds ---" % str (time.time() - start_time)
            return redirect("/")
            return redirect(url_for('uploaded_file',
                                    filename="facedetect-"+filename))

    from os import listdir
    from os.path import isfile, join
    htmlpic=""
    for f in sorted(listdir(UPLOAD_FOLDER)):
        if isfile(join(UPLOAD_FOLDER,f)):
            print f
            htmlpic+="""
            
                
            
                """

    return '''
    <!doctype html>
    <title>Upload new File</title>
    <h1>Upload new File</h1>
    <form action="" method="post" enctype="multipart/form-data">
        <p><input type="file" name="file">
           <input type="submit" value="Upload">
    </form>
    ''' + htmlpic


from flask import send_from_directory

@app.route('/uploads/<filename>')
def uploaded_file(filename):
    return send_from_directory(app.config['UPLOAD_FOLDER'], filename)


from werkzeug import SharedDataMiddleware
app.add_url_rule('/uploads/<filename>', 'uploaded_file', build_only=True)
app.wsgi_app = SharedDataMiddleware(app.wsgi_app, {
    '/uploads': app.config['UPLOAD_FOLDER']
})


if __name__ == "__main__":
    app.debug = True
    app.run()

After uploading, the processed picture won't show up on the front page right away; you will need to refresh a couple of times, and after 5-6s it shows up. Keep on refreshing, and you may like to send me a pull request at: https://github.com/sadaf2605/facedetection-flaskwebapp-rabbitmq

Testing rails app with RSpec, FactoryGirl, Capybara, Selenium & Headless

In this blog I don't have any intention to argue about or discuss the necessity of Test Driven Development (TDD), but without any doubt I believe testing is necessary. The other day I was watching a talk by Kent Beck where he said, "if you don't find testing necessary for your code, then don't do it. What's the point in arguing about it?" I am a believer in that too; as he said, testing is all about gaining confidence. In this blog I am putting together how I used RSpec, Capybara and Selenium to gain this "confidence".

RSpec has a separate repository for Rails (https://github.com/rspec/rspec-rails) and a separate gem. To add RSpec to a project we need to put the rspec-rails gem in the Gemfile like this:

group :development, :test do
  gem 'rspec-rails', '~> 3.0'
end

then do a:

bundler install

Now we need to run a generator command:

rails generate rspec:install

which basically generates the following files:

spec/spec_helper.rb
spec/rails_helper.rb

Later in this blog we will configure them. Now we will write some RSpec tests. RSpec tests can be for many things and of many types: models, controllers, features and so on. There are generator commands to help you create the files.

When I run:

rails generate rspec:features home_page

It creates spec/features/home_page_spec.rb. As you can see, every rspec file name ends with _spec.rb, so when we run bundle exec rspec it finds all the *_spec.rb files and tests them. We can also run:
bundle exec rspec spec/features

which will only run the feature tests. We can also do:

bundle exec rspec spec/features/home_page_spec.rb:2

which will run the test that starts at that line. Now it is time to write some tests. Before writing a test there is one thing we always need to keep in mind: in every test case we test only one feature and we focus only on that feature. (Maybe in a later blog I will describe mocks and stubs, two cool features.) If we don't focus on one feature, it will take more time to debug our test cases than to write the patch.

In our RSpec file we will be using the describe and it methods. An it block tests one piece of functionality, and a describe block can hold a bunch of it blocks. Similarly, given/let calls are used at the top of a feature/describe/context block and apply to all contained feature/describe/context or scenario/it blocks.

In spec/features/home_page_spec.rb:

require 'rails_helper'

RSpec.feature "home page", type: :feature do
  describe "navigation" do
    it "shows me menu" do
      get "/"
      # do testing
      assert_select ".navbar", :text => "home"
    end
  end

  it "shows title" # shows as pending because it has no do block
end

There are alias methods and other ways to write this: for instance, feature is an alias for describe ..., :type => :feature, background is an alias for before, scenario for it, and given/given! are aliases for let/let!, respectively.
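To make that concrete, here is the earlier home page spec written with those aliases; it is only a sketch of the syntax, nothing new is being tested:

require 'rails_helper'

RSpec.feature "home page" do        # feature ~ describe ..., :type => :feature
  given(:menu_text) { "home" }      # given ~ let
  background { get "/" }            # background ~ before

  scenario "shows me menu" do       # scenario ~ it
    assert_select ".navbar", :text => menu_text
  end
end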

We can see that by default RSpec deals with plain requests like get and post; it is not the browser a user will use to see the product. The main point is that it has no JavaScript support, and a modern website is full of JavaScript. So what can we do? We will use Capybara to mimic a browser. We add the following gem to our Gemfile and run bundler install again:

gem 'capybara'

Now it is time to rewrite our test code. We need to add require 'capybara/rails' in our spec file.

require 'capybara/rails'
require 'rails_helper'

RSpec.feature "home page", type: :feature do
  describe "navigation" do
    it "shows me menu" do
      visit "/"
      expect(page.find('.nav-bar')).to have_content("brand") # page.find looks the element up by CSS selector
    end
  end
end

A Capybara cheat sheet which I found very useful can be found here: https://gist.github.com/zhengjia/428105

Now we will integrate FactoryGirl by adding its gem to the Gemfile:

gem 'factory_girl_rails'

FactoryGirl is an awesome tool for automating model entry creation. Suppose we have a BlogPost model which has many Categories and one Blogger. In real life even this simple model will look like this when we write a test case:

blog_post = BlogPost.new({title: "this is title", body: "this is body"})
blog_post.categories << Category.create(name: "technology")
blog_post.blogger = User.create(username: "sadaf", password: "password", password_confirmation: "password")
blog_post.save()

For each post we would need to write this many lines of code, and every post would need a hand-written title and body, which is tiresome. FactoryGirl solves this problem. We will define the factories in support/factories.rb:

FactoryGirl.define do
  # active admin factory
  factory :admin_user, :class => AdminUser do
    sequence(:email) { |n| "email#{n}@example.com" }
    password "password"
    password_confirmation "password"
  end

  factory :category, :class => Category do
    name "category name"
  end

  factory :blog_post, :class => BlogPost do
    association :blogger, factory: :admin_user
    sequence(:title) { |n| "this is super cool title #{n}" }
    sequence(:body)  { |n| "this is" + " super duper " * n + " body." }
    description "static description"

    factory :blog_post_with_category do
      after(:create) do |post|
        create(:category, blog_post: post)
      end
    end
  end
end

So, to create an AdminUser using factory_girl, we will do:

FactoryGirl.create(:admin_user)
Every time we call the code above it creates a different email address (email1@example.com, email2@example.com, ...), but the password remains the same in every case since it is not inside a sequence block. blog_post has an association, so when we do FactoryGirl.create(:blog_post) it will also create an admin_user to satisfy that association; we don't need to create it separately. And to create a blog_post with a category we do the following:

blog_post = create(:blog_post_with_category)

Life is pretty easy right now.

....
it " shows me sub-menu on parent-menu click" do
visit "/"
blog_post = create(:blog_post_with_category
expect(page).to have_content(blog_post.title)
end
...

Now we may want to test that our beautiful nested navbar is working, beautiful because it is manipulated by JavaScript. To do that we need to tag our it or describe block with js, which is very simple: just pass js: true as a parameter to the it or describe and it is done. But before we write the test we need to add another gem, for our webdriver.

gem 'selenium-webdriver'

Now we can write:

describe "navigation" do
....
it " shows me sub-menu on parent-menu click", js:true do
visit "/"
click("parent menu") #text of button
expect(page).not_to have_selector('.submenu',visible:true) # if visible or not
expect(page.find('.submenu')).to have_content("submenu")
end
...
end

So far when we ran bundle exec rspec we did not see any browser, but now we will see a browser window popping up and closing again, which can be annoying sometimes. To solve this we could either use webkit instead of selenium, or, as I will do in this blog, use another gem called headless, which turns Selenium into a UI-less browser, which is super cool.

gem 'headless'

Then we need to install Xvfb, which is as easy as typing:

sudo apt-get install xvfb

Then we need to modify our spec_helper.rb. We need to add the following lines at the beginning of the file:

if ENV['HEADLESS'] == 'true'
  # require 'rubygems'
  require 'headless' # if you get a LoadError here, read below
  headless = Headless.new
  headless.start

  at_exit do
    exit_status = $!.status if $!.is_a?(SystemExit)
    headless.destroy
    exit exit_status if exit_status
  end
end

This code is responsible for starting headless when we run HEADLESS=true bundle exec rspec, and for destroying it at exit. Then, inside the configure block, we need to put before and after hooks which switch the driver to Selenium for js examples:

RSpec.configure do |config|
  ....
  config.before(:each) do |example|
    if example.metadata[:type] == :request and example.metadata[:js]
      Capybara.current_driver = :selenium
    end
  end

  config.after(:each) do |example|
    Capybara.use_default_driver
  end
  ....
end

If you see a problem loading the headless gem:

LoadError: cannot load such file -- headless
from /usr/lib/ruby/2.1.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /usr/lib/ruby/2.1.0/rubygems/core_ext/kernel_require.rb:55:in `require'

Then you can make the headless gem directory readable by all users using chmod. First locate the gem:

gem which headless

which for me prints:

/var/lib/gems/2.1.0/gems/headless-2.2.0/lib/headless.rb

Then take the gem directory, i.e. the part of the path before lib, and chmod it to 655.
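Putting the two steps together, the workaround looks roughly like this (the gem path will differ on your machine):

gem which headless
# => /var/lib/gems/2.1.0/gems/headless-2.2.0/lib/headless.rb

# chmod the gem directory, i.e. everything before /lib, so every user can read it
sudo chmod 655 /var/lib/gems/2.1.0/gems/headless-2.2.0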

Customize the Django admin page from scratch

In my last project with Django I had to modify and extend the Django admin page massively; there were cases where I had to wipe out the current admin page and create my own. Usually people don't go through this kind of trouble, I guess; they just write their own models and views rather than an admin page. That could be a workaround, but I decided not to do that because the Django admin provides a lot more, and it felt like the right way to do it. There was no straightforward tutorial that covers all of that, so here I go; I can guide you a little bit on your way.

 

Usually we don't define our own admin page; instead we use the admin from django.contrib.

Example:

from django.contrib import admin
admin.site.register(Question)

Now for our own admin page we will extend AdminSite.

class MySiteAdminSite(admin.AdminSite):
    site_header = 'This is my Custom Admin Site'
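The class alone is not enough; we also need to instantiate it so that url.py below has something to import. A minimal sketch (the my_admin_site name is simply the one url.py expects):

# admin.py (sketch)
my_admin_site = MySiteAdminSite(name='my_admin')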

It will not come into action unless we add this to url.py:

import admin

urlpatterns = patterns('',

    # (r'^admin/', include(django.contrib.admin.site.urls)),
     url(r'^admin/', include(admin.my_admin_site.urls)),

)+ static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)

 

Our admin page is working, but there is no model registered to it yet, so there is nothing to show. Let's register one with a ModelAdmin:

class ArticleAdmin(admin.ModelAdmin):

    fields =('author','title' ,'slug','body', 'categories','cover')
    def change_view(self, request,object_id, form_url='', extra_context=None):
        extra_context = extra_context or {}
        extra_context["show_save_as_draft"] = True

        return super(ArticleAdmin, self).change_view(request,object_id, form_url, extra_context)

    prepopulated_fields = {'slug': ('title',) }

    def save_model(self, request, obj, *kwwargs):
        if not obj.id:
            obj.slug = slugify(obj.title)
            
        if request.user.is_superuser or request.user==obj.author:
            super(ArticleAdmin,self).save_model(request, obj, *kwwargs)
        else:
            obj.author=None
            super(ArticleAdmin,self).save_model(request, obj, *kwwargs)


    def queryset(self, request):
        qs = super(ArticleAdmin, self).queryset(request)

        if request.user.is_superuser:
            return qs
        else:
            return qs.filter(author=request.user)

my_admin_site.register(Article, ArticleAdmin)

I actually wrote quite a lot of code in my ArticleAdmin, so let me explain it. Basically we override the change_view, save_model and queryset methods of the ModelAdmin class. change_view is responsible for rendering the edit/create form. This is what change_form.html looks like: https://github.com/django/django/blob/master/django/contrib/admin/templates/admin/change_form.html

You can pass your own variables via extra_context; in my case I passed show_save_as_draft. I then edited change_form.html and put my edited version of it at template/admin/app/change_form.html.

In save_model I check some permissions and slugify the link; pretty straightforward.

Now queryset is pretty interesting. It usually returns all the articles, doesn't it? But what if we want a certain user, when they log in, to see only their own articles? That is exactly what I did with the filter.

 

Now we have a working custom admin that shows stuff on its dashboard, but we have yet to customize the admin site itself.

class StripeAdminSite(admin.AdminSite):

    def index(self, request, extra_context=None):
        extra_context = extra_context or {}
        extra_context["site_visited_by"] = 10000


        return super(StripeAdminSite, self).index(request,extra_context)


    def censor_article(self,request,id):
        try:
            article=Article.objects.get(id=id)
            if request.user.is_superuser or article.author==request.user:
                article.censored=True
                article.save()
        except Article.DoesNotExist:
            pass

        from django.http import HttpResponseRedirect
        return HttpResponseRedirect(request.META.get('HTTP_REFERER'))


    def get_urls(self):
        #self.admin_site.admin_view(self.approve_staff)
        urls = super(StripeAdminSite, self).get_urls()
        from django.conf.urls import url
        my_urls = [
            url(r'^article/censor/(?P<id>\w+)/$', self.censor_article),
        ]

        return my_urls + urls

 

Here, index is what is being displayed when you hit /admin.  Your current index looks like this: https://github.com/sehmaschine/django-grappelli/blob/master/grappelli/templates/admin/index.html

You can override it by putting your own index.html at /template/admin/index.html, and you can pass variables to it via the extra_context parameter of the index function.
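For instance, once your own copy of index.html is in place, anywhere in that template you can print the value we passed above (a sketch; site_visited_by is the variable set in StripeAdminSite.index):

{# inside your /template/admin/index.html #}
<p>This site has been visited {{ site_visited_by }} times.</p>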

I wrote a view function censor_article on the admin class and added the corresponding URL in the get_urls function. That's basically it.

One workaround to pass variables (context) to the Django admin submit_line.html template

Recently I had to modify the Django admin page massively. While trying to add a new button to the add/change page of a model in the admin, I got into trouble; the trouble was not showing the button or getting it working, it was passing the variable. So in this blog I am going to describe how I solved it.

I am overriding this template submit_line.html:

{% load i18n admin_urls %}
{% if show_save %}{% if is_popup %}{% else %}{% endif %} {% trans 'Save' %}{% endif %}
{% if show_save_as_draft %} {% endif %}
{% if show_save_and_add_another %} {% trans 'Save and add another' %}{% endif %}
{% if show_save_and_continue %} {% trans 'Save and continue editing' %}{% endif %}
{% if show_delete_link %}
  {% url opts|admin_urlname:'delete' original.pk|admin_urlquote as delete_url %}
  {% trans "Delete" %}
{% endif %}

Here,

{{show_save_as_draft}}

is in the extra_context of our ModelAdmin while it is being shown:

#/home/sadaf2605/PycharmProjects/stripe/stripe/news/admin.py
class ArticleAdmin(admin.ModelAdmin):
    change_form_template = 'admin/news/change_form.html'
    def change_view(self, request,object_id, form_url='', extra_context=None):
        extra_context = extra_context or {}
        extra_context["show_save_as_draft"] = True
        return super(ArticleAdmin, self).change_view(request,object_id, form_url, extra_context)

Still

{{show_save_as_draft}}

is not showing up. This is the problem

To solve this problem I override the template tag that is responsible for showing the buttons; basically, that template tag was only keeping a few selected context fields. In my new tag I also keep the fields that are necessary for my app.

#stripe/stripe/news/templatetags/stripe_admin_tag.py
__author__ = 'sadaf2605'
from django import template
register = template.Library()
from django.contrib.admin.templatetags import admin_modify

@register.inclusion_tag('admin/submit_line.html', takes_context=True)
def submit_line_row(context):
    context = context or {}
    ctx= admin_modify.submit_row(context)
    if "show_save_as_draft" in context.keys():
        ctx["show_save_as_draft"] = context["show_save_as_draft"]
    return  ctx

And then finally I need to override change_form.html as well; I need to replace:

{% block submit_buttons_bottom %}{% submit_row %}{% endblock %}

with:

{% load stripe_admin_tag %}
{% block submit_buttons_bottom %}{% submit_line_row %}{% endblock %}

/stripe/stripe/stripe/templates/admin/news/change_form.html

{% extends "admin/base_site.html" %}
{% load i18n admin_urls admin_static admin_modify %}

{% block extrahead %}{{ block.super }}

{{ media }}
{% endblock %}

{% block extrastyle %}{{ block.super }}{% endblock %}

{% block coltype %}colM{% endblock %}

{% block bodyclass %}{{ block.super }} app-{{ opts.app_label }} model-{{ opts.model_name }} change-form{% endblock %}

{% if not is_popup %}
{% block breadcrumbs %}

{% endblock %}
{% endif %}

{% block content %}
{% block object-tools %}
  {% if change %}{% if not is_popup %} {% endif %}{% endif %}
{% endblock %}

{% csrf_token %}{% block form_top %}{% endblock %}
{% if is_popup %}{% endif %}
{% if to_field %}{% endif %}

{# WP Admin start #}
{% if 0 %}{% block submit_buttons_top %}{% submit_row %}{% endblock %}{% endif %}
{# WP Admin end #}

{% if errors %}
  {% if errors|length == 1 %}{% trans "Please correct the error below." %}{% else %}{% trans "Please correct the errors below." %}{% endif %}
  {{ adminform.form.non_field_errors }}
{% endif %}

{% block field_sets %}
  {% for fieldset in adminform %}
    {% include "admin/includes/fieldset.html" %}
  {% endfor %}
{% endblock %}

{% block after_field_sets %}{% endblock %}

{% block inline_field_sets %}
  {% for inline_admin_formset in inline_admin_formsets %}
    {% include inline_admin_formset.opts.template %}
  {% endfor %}
{% endblock %}

{% block after_related_objects %}{% endblock %}

{% load stripe_admin_tag %}
{% block submit_buttons_bottom %}{% submit_line_row %}{% endblock %}

{% if adminform and add %}
  (function($) {
    $(document).ready(function() {
      $('form#{{ opts.model_name }}_form :input:visible:enabled:first').focus()
    });
  })(django.jQuery);
{% endif %}

{# JavaScript for prepopulated fields #}
{% prepopulated_fields_js %}
{% endblock %}

Ghost methods and meta programming can make your life much much much easier

I have been cleaning up some of my code today in a Ruby on Rails app. I had 10 similar methods with minor differences, which kept me thinking: why would I write 10 separate methods rather than 1, using Ruby's ghost methods? But things are a little bit different when we work with Rails, and that is basically the motivation behind this blog.

Suppose my client starts taking orders from customers to build custom cars; imagine how many methods like the following we would need to write!

def add_tire
    @current_build = get_current_build
    tire = Tire.find(params[:tire_id])
    # todo do stuff
    redirect_to current_build_url
end

def add_engine
    @current_build = get_current_build
    engine = Engine.find(params[:engine_id])
    # todo add methods
    redirect_to current_build_url
end

#....more similar looking add_items methods

Probably a hundred; the count could even reach a thousand. But don't be surprised: Ruby has its own elegant way to handle this kind of repetition. Using Ruby's support for ghost methods, we can solve this problem. Ghost methods are usually handled using method_missing. If you are not already aware of the power of method_missing, let me tell you in short: method_missing is called when a method is invoked on an object but is not defined on it. When a method is called, Ruby first checks the object's class; if it is not found there it checks the parent class, and then that class's parent, all the way up to Object, which is the ancestor of every Ruby class; only then is method_missing invoked.

Now let me show you an example:

class Test
    def say_hi(to)
        "Saying hi to "+to
    end
    def method_missing(name, *args)
        say_hi(*args)
    end
end

irb(main):022:0> Test.new.say_hi("mike")
=> "saying hi to mike"
irb(main):023:0> Test.new.say_hello("mike")
=> "saying hi to mike"

 

 

The say_hello method does not exist in the Test class, but it acts as if it did; that is why such methods are called ghost methods.

Every Ruby object has a send method that takes a method name as a string (or symbol) argument and invokes that method.

class Test
    def say_hi(to)
        "saying hi to " + to
    end

    def say_bye(to)
        "saying bye to " + to
    end

    def method_missing(method, *args)
        # forward the missing method name to the matching say_* method
        self.send("say_#{method}", *args)
    end
end

irb(main):023:0> Test.new.say_hi("mike")
=> "saying hi to mike"
irb(main):022:0> Test.new.hi("mike")
=> "saying hi to mike"
irb(main):023:0> Test.new.say_bye("mike")
=> "saying bye to mike"
irb(main):022:0> Test.new.bye("mike")
=> "saying bye to mike"

 

 

We have seen how to call a method dynamically, but we may also need to look up a class by name dynamically. For that, Object.const_get takes a string and returns the class with that name. (Object.const_get "Test").new.send("hi","mike") is the same as Test.new.hi("mike").

irb(main):023:0> (Object.const_get "Test").new.send("hi","mike")
=> "saying hi to mike"

 

In Ruby metaprogramming there is also a method called eval, which executes a string as Ruby code.

irb(main):025:0> eval("Test.new.hi('mike')")
=> "saying hi to mike"

 

Now I think we know everything we need to attempt a ghost method in our controller. But writing a ghost method with method_missing does not help much in a Rails controller, because Rails does not require an action method to be present in the controller if a corresponding template exists; when the action is missing it simply tries to render the template, so method_missing is never invoked. But fear not: we have action_missing, which is called when Rails fails to find an action. Here is an example for our car order case:

  def action_missing(m, *args, &block)
    if m.starts_with? "add_"
      k=(m.split "_", 2) [1]

      @current_build = get_current_build
      qty=params[:qty]      

      build= (Object.const_get "#{k.camelize}Build").create({(k+"_id").to_sym => params[(k+"_id").to_sym], :market_status_id => params[:market_status]})
      eval("@current_build.#{k}_builds << build")
      redirect_to current_build_url
    else
      super
    end
  end

Here we go: we have saved at least 100 lines of code. Not only that, now we need to change only one place when these methods change, where previously we had to change 100 lines.

Preparing DatabaseCleaner with RSpec, Capybara and Selenium for testing a Rails app

database_cleaner is a beautiful gem that cleans the database, and we can use it to clean our test database for each test case. I was banging my head for hours to make it work, even though I had followed the right documentation. So I thought maybe I would look "COOL" if I wrote a blog describing the steps, and at the same time it could be helpful to others.

I would add to my Gemfile:

gem 'database_cleaner'

Then I would like to do a

bundler install

Now that the database_cleaner gem is installed I want to add DatabaseCleaner to my RSpec configuration. It is always a good idea to keep this code separate, so I created a different file, 'support/database_cleaner.rb'. We need to set the database cleanup strategy: by default RSpec's get/post request-driven testing uses Transaction, but Capybara with Selenium, which is used for testing js, needs Truncation, so we will need at least two different strategies. It is also good practice to clean everything up before the suite runs, so even if the database held any leftover data by mistake, it cannot mess with my test cases. So my 'support/database_cleaner.rb' looks like:

RSpec.configure do |config|
  #It runs before the entire test suite runs and it clears the test database. 
  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation)
  end
  
  # it sets the default database cleaning strategy to be transactions.
  config.before(:each) do
    DatabaseCleaner.strategy = :transaction
  end
  
  # Tests flagged with :js => true run through Capybara, using Capybara's test
  # server and firing an actual browser window via the Selenium backend. For these
  # tests our transaction strategy won't work, so we override the setting and
  # choose the "truncation" strategy instead.
  config.before(:each, :js => true) do
    DatabaseCleaner.strategy = :truncation
  end


# Now we need to start and end database cleaner.
  config.before(:each) do
    DatabaseCleaner.start
  end

  config.after(:each) do
    DatabaseCleaner.clean
  end

end

 

Now we need to fix a few things in `rails_helper.rb`:
we will add require 'support/database_cleaner'.

# This file is copied to spec/ when you run 'rails generate rspec:install'
ENV['RAILS_ENV'] ||= 'test'
require File.expand_path('../../config/environment', __FILE__)
# Prevent database truncation if the environment is production
abort("The Rails environment is running in production mode!") if Rails.env.production?
require 'rspec/rails'
require 'spec_helper'
require 'support/database_cleaner'

But it still won't work unless we change `rails_helper.rb` from:
config.use_transactional_fixtures = true
To:
config.use_transactional_fixtures = false

If you try to put `config.use_transactional_fixtures = false` in `spec_helper.rb`, then you may need to keep this require order of rspec/rails and spec_helper in `rails_helper.rb`:

require 'rspec/rails'
require 'spec_helper'

or else you may get:

spec_helper.rb:53:in `block in <top (required)>': undefined method `use_transactional_fixtures=' for #<RSpec::Core::Configuration:0x00000001d971b8>

How Ubuntu Server boots up: System V init vs Upstart

How many times did I boot my laptop today? Well, I booted it twice; usually I never go out, but shockingly today I did. Don't get too curious about where I went, because that's not the point; the point of curiosity is what happens when an Ubuntu server boots up. That's what we will try to figure out in this blog. Unlike most other Linux distributions, Ubuntu uses a different startup process for services, known as Upstart. As it is backward compatible, the difference goes unnoticed most of the time.

When an Ubuntu server starts up, the first thing it does is start the GRUB boot loader. GRUB lives at least partially in the boot code on the Master Boot Record (the first 512 bytes of the hard drive). It selects which Linux kernel the system will boot and which options to use when it boots.

When we look at /boot/grub/grub.cfg or /etc/default/grub, we see references to a program called update-grub. This is a helper program that automates the update of the GRUB configuration file when new kernels are added; it executes a number of configuration scripts stored in /etc/grub.d. When we select a kernel to boot from the GRUB menu, GRUB loads the kernel into memory along with its initrd file (initial RAM disk). The initrd file is actually a gzipped cpio archive, known as an initramfs file under Ubuntu; an example is initrd.img-2.6.32-14-generic-pae.

When a kernel boots, it needs to be able to at least mount the root file system so that it can access basic configuration files, kernel modules, and system binaries.

Now, with the ever-growing number of hardware devices and supported file systems, it makes sense to build support for them only when it is necessary; this keeps the kernel smaller and more flexible.

The kernel needs access to certain files to be able to mount the root file system. The initramfs file provides the kernel with the essential kernel modules and system binaries it needs to mount the root file system and complete the boot process. GRUB provides the information about which root file system to use.

When the kernel boots, it extracts the initramfs into RAM and runs a script called init. This script basically creates some system mount points and mounts the actual root partition. Finally, after this init script has mounted the real root file system, its last task is to run the /sbin/init program on the root file system, which starts the next phase of the boot process.

The /sbin/init program is the parent process of every program running on the system. This process always has a PID of 1 and is responsible for starting the rest of the processes that make up a running Linux system.

UNIX-like OSes have a few standard ways to initialize; most well-known distributions use the System V init model, but Ubuntu Server has switched to a system known as Upstart. Ubuntu still keeps some features of System V init, such as runlevels and the /etc/rc?.d directories, for backward compatibility. The good thing about Upstart is that it manages everything under the hood.

In the System V init system, different system states are known as runlevels. When System V init starts, it reads the configuration file located at /etc/inittab and discovers its default runlevel. Then it enters that runlevel and starts the processes that have been configured to run at it. Runlevels are labeled with numbers ranging from 0 to 6. For instance, runlevel 0 is reserved for a halted system state: when we enter runlevel 0, the system shuts down all running processes, unmounts all file systems, and powers off. Likewise, runlevel 6 is reserved for rebooting the machine. Runlevel 1 is reserved for single-user mode, a state where only a single user can log in with only a few processes running, which comes in very handy for diagnosis. Even in the default GRUB menu you will notice a recovery mode option that boots you into runlevel 1.

Runlevels 2 through 5 are left for distributions and for us to define, so we can create our own runlevels. Traditionally in Linux distributions one runlevel is allocated for a graphical desktop (e.g. runlevel 5 on Red Hat) and another runlevel for a system with no graphics (e.g. runlevel 3 on Red Hat). The user also has scope to create their own runlevel; for instance, starting up a system without network access could come in handy sometimes, so we could define that as a runlevel. In that case we need to pass an argument at the boot prompt to override the default runlevel with the desired one. Once the system is booted, we can change the current runlevel with the init command, for example sudo init 1.
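For example, to check which runlevel you are in right now and then drop to single-user mode:

runlevel      # prints the previous and current runlevel, e.g. "N 2"
sudo init 1   # switch to single-user (recovery) mode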

The /etc/init.d directory contains the start-up scripts of all services across all runlevels. These scripts usually support start and stop commands.

After the runlevel is chosen, init goes to /etc/rcS.d and runs each script that begins with an S, in numerical order, with start as an argument. Finally init is finished, but it stays running in the background, waiting for the runlevel to change.

Init scripts have a few drawbacks. For instance, if a service dies before completing its task, nothing automatically restarts the process, so we need another tool to monitor whether the process succeeded or not. Init scripts are also generally triggered only by a change in runlevel or when the system starts up. A perfect example is init scripts that depend on a network connection: on Ubuntu the init script that establishes the network connection is called networking. As we know, scripts follow a numeric sequence, so any init scripts that depend on a network connection are named with a higher number than the networking script to ensure they run after it. Let's imagine a situation where you boot up your system while your network cable is unplugged: in the System V init system, the networking init script will run and fail, and the other network-dependent scripts will time out one by one.

Upstart was designed not only to address some of the shortcomings of the System V init process, but also to provide a more robust system for managing services. Upstart solves the problem above because it is event driven: it constantly monitors the system for certain events to occur, and it can be configured to take action when they do. Some sample events might be system start-up, system shutdown, the Ctrl-Alt-Del sequence being pressed, the runlevel changing, or an Upstart script starting or stopping.

Upstart does not completely replace System V init, the functionality of init and the /etc/inittab file, or the notion of runlevels; instead, more and more of the core functionality is being ported to Upstart scripts. The difference is that Upstart now starts and stops services when the runlevel changes. Upstart scripts are defined with either the script or exec option. The exec option keeps track of the job's PID; the convention is to keep these PIDs in the /var/run/ directory. With the script option, Upstart treats the lines that follow as a shell script until it reaches the end script line. Upstart provides ways to check the status of Upstart jobs and to start and stop them as appropriate, with the appropriately named status, start, and stop commands. For example we can use sudo /etc/init.d/networking status; the Ubuntu shorthand for this command is sudo service networking status. To disable an init script from starting at boot we use sudo update-rc.d -f servicename remove, and to enable it again we use sudo update-rc.d servicename defaults. When we need to write our own script, we should start from the skeleton provided by Ubuntu at /etc/init.d/skeleton. Init scripts reside in /etc/init.d and have symlinks in the /etc/rc?.d directories, where ? is a number representing a runlevel. So when we create our own script we need to choose the value in rc?.d wisely: the lower the value, the earlier it runs, and we need to be careful about dependencies. The commands are collected below.
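Putting those service-management commands in one place, using networking as the example service:

# check the status of a service (classic form, then the Ubuntu shorthand)
sudo /etc/init.d/networking status
sudo service networking status

# disable an init script from starting at boot, then enable it again
sudo update-rc.d -f networking remove
sudo update-rc.d networking defaults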

In Ubuntu, services are managed in two ways: either through init scripts or using xinetd. Xinetd is an updated and resource-efficient version of the classic inetd service. When a service is started from an init script at boot time, it could sit idle for ages before it is ever accessed, wasting valuable server resources. Xinetd, on the other hand, listens on the ports its child services use; if a connection is made on one of those ports, xinetd spawns the service that corresponds to that port, and once the connection is finished the service exits until it is needed again.

Thanks:
1. BRACU Ayesha Abed Library
2. Kyle Rankin and Benjamin Mako Hill
3. My boredom 😛