Wednesday, August 25, 2010

I should live grateful for the present

Yesterday, while browsing the news online, I came across a follow-up story from the program 'SOS 24' about a woman who lives her life being beaten and bruised.

The abuser was her husband, and she got by collecting scrap paper.

 

Seeing that made me feel, for a moment, how much I have to be happy about: I never ended up with an abusive partner, so I'm loved instead of beaten; my pay isn't much, but I have a healthy body and mind that let me work and earn it; and I have eyes to see the wonderful sky and the people I love, and a mouth and teeth to enjoy good food.

It was a day that reminded me of all that.

 

There are people less good-looking than me and people smaller than me, so why do I always look down on myself and torment myself?

 

 

Today I can do anything, so let me live grateful for today and for the present.

Let me live grateful that there is a tomorrow to come.

Tuesday, August 24, 2010

They say the minimum cost of living is 1,439,000 won...

 

I dropped by Naver for a moment, and what I saw was

 

that the minimum cost of living for a four-person household is 1,439,000 won.

 

So, well... I guess that means I'm barely earning enough to get by.

 

On that minimum I give my mother an allowance and even manage to save a little, so maybe I'm the fortunate one after all.

 

How do I overcome this reality of barely scraping by?

 

 

Monday, August 23, 2010

My head is empty

Until just last year I was the one whose head throbbed and who developed insomnia from thinking about all sorts of things,

but this year, the year I turned exactly thirty,

strangely I don't want to think about anything, and even when I try, a few minutes later I just go blank.

 

I used to be the type whose three-day resolutions at least restarted every three days,

but now they're three-second resolutions.

 

What is it that is making me

so listless and vacant.....

 

 

Tuesday, August 17, 2010

Setting up CPU dedication for Xen dom0

Managing Xen Dom0's CPU and Memory

The performance of Xen's Dom0 is important for the overall system, since the disk and network drivers run in Dom0. I/O-intensive guest workloads may consume a lot of Dom0's CPU cycles. The Linux kernel calculates various network-related parameters based on the amount of memory present at boot time, and it also allocates memory for storing memory metadata (per-page info structures) based on that boot-time amount. After ballooning down Dom0's memory, the network-related parameters will no longer be correct. In our observation, ballooning down the memory of a busy Dom0 sometimes causes SSH to die, which is a nightmare for the administrator since SSH is usually the only way to control the server remotely. Another bad effect is that memory is wasted keeping large memory metadata around for a smaller amount of memory.

Now let's look at how to manage Xen Dom0's CPU and memory in a better way.

Dedicate a CPU core for Dom0

Dom0 will have free CPU time to process I/O requests from the DomUs if it has dedicated CPU core(s). This may yield better performance, since there are fewer CPU context switches to do in Dom0.

We can dedicate CPU core(s) to Dom0 by passing the “dom0_max_vcpus=X dom0_vcpus_pin” options to the Xen hypervisor (xen.gz) in /boot/grub/grub.conf, where X is the number of VCPUs dedicated to Dom0.

Since hyperthreading is enabled on most modern CPUs, one physical core shows up as two logical processors, so the “X” above should usually be 2 to dedicate one CPU core.

kernel /xen.gz console=vga vga=ask noreboot dom0_max_vcpus=2 dom0_vcpus_pin

After booting the system, the VCPU list can be shown on Dom0 with this command:

# xm vcpu-list
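
On a Dom0 with two pinned VCPUs, the output looks roughly like the sketch below (the exact column widths, names and times here are illustrative and vary by Xen version); the CPU Affinity column confirms that each VCPU is pinned to a single processor:

Name                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0             0     0     0   r--     123.4 0
Domain-0             0     1     1   -b-      98.7 1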

Even after the system has booted, the number of VCPUs can be changed with the xm command. We can give Domain-0 two VCPUs and pin them to processors 0 and 1 with these commands:

# xm vcpu-set Domain-0 2
# xm vcpu-pin Domain-0 0 0
# xm vcpu-pin Domain-0 1 1

Dedicate memory for Dom0

We should always dedicate a fixed amount of memory to Xen Dom0.

We can set the initial memory size of Dom0 by passing the “dom0_mem=xxx” option to the Xen hypervisor (xen.gz) in /boot/grub/grub.conf, where “xxx” is the amount of memory for Dom0 in KB.

To set the initial memory size of Dom0 to 2 GB, just change the entry in grub.conf to:

kernel /xen.gz console=vga vga=ask noreboot dom0_max_vcpus=2 dom0_vcpus_pin dom0_mem=2097152
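
After rebooting with the new entry, a quick sanity check (optional, just a sketch) is to list Domain-0 and confirm that its memory matches what was passed on the Xen command line; xm list reports each domain's memory in MB:

# xm list Domain-0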

Set lowest permissible memory for Dom0

The option dom0-min-mem in the Xend configuration file /etc/xen/xend-config.sxp is used to specify the lowest permissible memory for Dom0.

The value of dom0-min-mem (in MB) is the lowest permissible memory level for Dom0; the default is 256. To keep Dom0's memory from ever dropping below 2 GB, just set:

(dom0-min-mem 2048)

Preventing dom0 memory ballooning

The “enable-dom0-ballooning” option in the Xend configuration file is used to specify whether Dom0's memory can be ballooned out. Setting “enable-dom0-ballooning” to “no” will make sure Xend never takes any memory away from Dom0:

(enable-dom0-ballooning no)
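
Putting the two Xend settings together, the relevant lines in /etc/xen/xend-config.sxp would look roughly like this sketch (the 2048 MB minimum is just the example value from above; adjust it to your own sizing):

(dom0-min-mem 2048)
(enable-dom0-ballooning no)

Xend has to be restarted for changes to this file to take effect, for example with "/etc/init.d/xend restart" on a typical init-script based install.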


Monday, August 16, 2010

Changing a VM's CPU share in Xen

Credit-Based CPU Scheduler

Introduction

The credit scheduler is a proportional fair share CPU scheduler built from the ground up to be work conserving on SMP hosts. It is now the default scheduler in the xen-unstable trunk. The SEDF and BVT schedulers are still optionally available but the plan of record is for them to be phased out and eventually removed.

Description

Each domain (including the Host OS) is assigned a weight and a cap.

Weight

A domain with a weight of 512 will get twice as much CPU as a domain with a weight of 256 on a contended host. Legal weights range from 1 to 65535 and the default is 256.
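
For example, on a fully contended host running three busy domains with weights 512, 256 and 256, the shares work out proportionally to 512/1024, 256/1024 and 256/1024, i.e. roughly 50%, 25% and 25% of the CPU.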

Cap

The cap optionally fixes the maximum amount of CPU a domain will be able to consume, even if the host system has idle CPU cycles. The cap is expressed in percentage of one physical CPU: 100 is 1 physical CPU, 50 is half a CPU, 400 is 4 CPUs, etc... The default, 0, means there is no upper cap.

SMP load balancing

The credit scheduler automatically load balances guest VCPUs across all available physical CPUs on an SMP host. The administrator does not need to manually pin VCPUs to load balance the system. However, she can restrict which CPUs a particular VCPU may run on using the generic vcpu-pin interface.
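
For example, assuming a hypothetical guest named guest01, the following sketch restricts all of its VCPUs to physical CPUs 2 and 3 (the domain name and CPU range are illustrative):

# xm vcpu-pin guest01 all 2-3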

Usage

The xm sched-credit command may be used to tune the per VM guest scheduler parameters.

xm sched-credit -d <domain>

lists weight and cap

xm sched-credit -d <domain> -w <weight>

sets the weight

xm sched-credit -d <domain> -c <cap>

sets the cap
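
As a concrete sketch, assuming a hypothetical guest domain named guest01, the following gives it twice the default weight but caps it at one physical CPU's worth of time, then reads the settings back:

# xm sched-credit -d guest01 -w 512 -c 100
# xm sched-credit -d guest01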

Algorithm

Each CPU manages a local run queue of runnable VCPUs. This queue is sorted by VCPU priority. A VCPU's priority can be one of two values, over or under, representing whether this VCPU has or hasn't yet exceeded its fair share of CPU resources in the ongoing accounting period. When inserting a VCPU onto a run queue, it is put after all other VCPUs of equal priority to it.

As a VCPU runs, it consumes credits. Every so often, a system-wide accounting thread recomputes how many credits each active VM has earned and bumps the credits. Negative credits imply a priority of over. Until a VCPU consumes its allotted credits, its priority is under.

On each CPU, at every scheduling decision (when a VCPU blocks, yields, completes its time slice, or is awakened), the next VCPU to run is picked off the head of the run queue. The scheduling decision is the common path of the scheduler and is therefore designed to be lightweight and efficient. No accounting takes place in this code path.

When a CPU doesn't find a VCPU of priority under on its local run queue, it will look on other CPUs for one. This load balancing guarantees each VM receives its fair share of CPU resources system-wide. Before a CPU goes idle, it will look on other CPUs to find any runnable VCPU. This guarantees that no CPU idles when there is runnable work in the system.

Glossary of Terms

  • ms: millisecond

  • Host: The physical hardware running Xen and hosting guest VMs.

  • VM: guest, virtual machine.

  • VCPU: Virtual CPU (one or more per VM)

  • CPU/PCPU: Physical host CPU.

  • Tick: Clock tick period (10ms)

  • Time-slice: The time-slice a VCPU receives before being preempted to run another (30ms)

  • Period: The accounting period (30ms). Once per period, credits earned are recomputed.

  • Weight: Proportional share of CPU per guest VM

  • Cap: An optional upper limit on the CPU time consumable by a particular VM.