Mermaid from "Pirates of the Caribbean: On Stranger Tides"

In the afternoon I went with my girlfriend to see Pirates of the Caribbean 4; not a bad film. The mermaid is stunning, to the point of stealing some of Johnny Depp's thunder. The mermaid love story, of course, is rather far-fetched…

One piece of advice: skip the 3D version. A few pictures, while I'm at it:
[Images: film stills and mermaid posters, featuring Astrid Bergès-Frisbey and Gemma Ward]

A few Limitless furniture pieces I've been eyeing lately:

  • G-bed
  • G-sofa
  • TV Rack
  • HANNAH Coffee Table

Is the post-holiday job market getting active?

Today I actually got a call from India: an Indian lady, in unmistakably Indian English, said she had found me on LinkedIn and hoped I would interview at their company in Hangzhou. Try as I might, I never got her to understand that I don't want to relocate.

All I can say is that the post-holiday talent market is like water about to boil: the molecules are getting restless.

Google officially confirms the Nexus S reboot bug

Google employees have officially confirmed the random-reboot bug and announced that Google will work with Samsung to track down the source of the trouble. Judging from the latest statements on the fix, they seem to have found the root cause, so the reboot bug can be expected to be resolved shortly. Analysis suggests the problem lies with the manufacturer rather than with the Android 2.3 Gingerbread operating system. At the very least, users who have already bought a Nexus S (such as myself) can breathe a sigh of relief: a fix is on the way!

For those who haven't bought a Nexus S yet but are still itching to, waiting a while is probably the best choice; and if you want more detailed news about the Nexus S, G4Games is an excellent source!

The Social Network wins Golden Globes for Best Picture, Best Screenplay, Best Director, and Best Original Score

I saw this film about three months ago; it is hardly a blockbuster, yet it moved me deeply, which is rare. After multiple nominations at this year's Golden Globes, The Social Network took home several awards, including Best Motion Picture (Drama), Best Screenplay (Aaron Sorkin), Best Original Score (Trent Reznor), and Best Director (David Fincher). The film was praised by the great majority of critics, so the honors are well deserved.

Amusingly, screenwriter Aaron Sorkin and producer Scott Rudin both thanked Facebook CEO Mark Zuckerberg, the real-life model for the film's protagonist, in their acceptance speeches.

On the 25th of this month, we can also look forward to finding out whether the film earns any Oscar nominations.

Welcome to Nexus S?


I picked up a Nexus S on Saturday and found it pretty cool that, once the account is bound, the phone simply uses the Gmail contacts as its address book. On Monday the welcome email from the Android Team arrived:

Google and Samsung have partnered to bring you Nexus S, a pure Google experience phone. Learn more:

Once registered, you can contact Samsung directly at +1 855-EZ2NEXUSS (+1 855-392-6398) for support.
Sign up to receive updates and promotions about Nexus S.

Enjoy!
The Android Team at Google

If there is one truly frustrating thing about the Nexus S, it is the pity of being unable to experience Google's customized YouTube.

Measuring disk speed on a Linode VPS

The facts show that Linode lives up to the praise so many industry folks heap on it; I measured its disk speed today, and the results are genuinely good:

[root@li229-25 ~]# hdparm -tT /dev/xvda

/dev/xvda:
 Timing cached reads:   25536 MB in  1.99 seconds = 12843.60 MB/sec
 Timing buffered disk reads:  340 MB in  3.00 seconds = 113.20 MB/sec

[root@li229-25 ~]# dd if=/dev/xvda of=/root/dump bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 18.9223 seconds, 55.4 MB/s

The above is the Linode VPS result: dd reads at about 55 MB/s.

Below, for comparison, is my desktop PC with an ordinary Western Digital drive:

[root@rh2 ~]# cat /proc/scsi/scsi 
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: WDC WD3200AAJS-0 Rev: 01.0
  Type:   Direct-Access                    ANSI SCSI revision: 05

[root@rh2 ~]# hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:   9132 MB in  2.00 seconds = 4569.83 MB/sec
 Timing buffered disk reads:  306 MB in  3.01 seconds = 101.72 MB/sec
[root@rh2 ~]# dd if=/dev/sda of=/root/dump bs=1024k count=1000  
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 22.6009 seconds, 46.4 MB/s

The Linode virtual server's disk is slightly faster than an ordinary PC's, which is decent performance for a VPS.
Used as a web server, combined with caching layers such as memcached, disk I/O will generally not be the main bottleneck.
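For completeness, a rough sequential write test can also be run, streaming zeros to a file; a minimal sketch, assuming a GNU dd that supports oflag=direct to bypass the page cache (the /root/ddtest path is arbitrary):

dd if=/dev/zero of=/root/ddtest bs=1024k count=1000 oflag=direct
rm -f /root/ddtest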

How Google DataWiki differs from FluidDB

Google recently started a DataWiki project on Google Labs. According to Google, DataWiki will be "a wiki for structured data". Its page explains that the idea grew out of the Person Finder application developed during the 2010 Haiti earthquake, when Google's developers saw an urgent need for a system for sharing structured data.

At first hearing, the project sounds very similar to FluidDB, often described as "a hosted database with the heart of a wiki", and FluidDB's Nicholas H. Tollervey was glad to explain how the two projects differ.

DataWiki is for quickly building simple, special-purpose databases, Person Finder being one example, whereas FluidDB tries to provide everything needed to build large databases.

As Tollervey puts it, the main differences between the two projects are:

  • Structure: every DataWiki page follows some predefined structure, whereas FluidDB imposes no schema on its users, and "things are always expressed as objects rather than lists".
  • Moderation: DataWiki does not appear to offer any access-control mechanism, while FluidDB has a permission system that controls which users may use a given tag or namespace.
  • Search: one can only search within a specific DataWiki page, whereas in FluidDB, permissions allowing, one can search for data across datasets.

For more about FluidDB, read "FluidDB in a Nutshell".

Script: Monitoring Memory and Swap Usage to Avoid a Solaris Hang

Applies to:

Solaris SPARC Operating System – Version: 8.0 and later   [Release: 8.0 and later ]
Solaris x64/x86 Operating System – Version: 8 6/00 U1 and later    [Release: 8.0 and later]
Oracle Solaris Express – Version: 2010.11 and later    [Release: 11.0 and later]
Information in this document applies to any platform.

Goal

Shortage of memory and virtual swap can result in slow system performance, hangs, failure to start new processes (fork failures), cluster timeouts, and thus unplanned outages. Monitoring resource usage is critical for system availability.

Solution

Physical Memory Shortages

Memory shortages can be caused by excessive kernel or application memory allocation and by leaks. During a memory shortage, the page daemon wakes up and starts scanning and stealing pages to bring freemem, a kernel global variable, back above the lotsfree kernel threshold. Systems short on memory slow down because memory pages may have to be read back from the swap disk for processes to continue executing.
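These counters can be watched from the shell with kstat's module:instance:name:statistic selector; a quick sketch (the trailing 5 asks kstat to re-sample every 5 seconds; the unix:0:system_pages kstat is shown in full later in this document):

# kstat -p unix:0:system_pages:lotsfree
# kstat -p unix:0:system_pages:freemem 5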

High kernel memory allocation can be monitored using mdb's ::memstat dcmd, which reports kernel, application, and file system memory usage:

# echo "::memstat" | mdb -k
Page Summary            Pages      MB   %Tot
------------        ---------  ------  ----
Kernel                  18330     143     7%  < Kernel memory
ZFS File Data               4       0     0%  < ZFS cache (see below)
Anon                    36405     284    14%  < Application memory: heap, stack, COW
Exec and libs            1747      13     1%  < Application libraries
Page cache               3482      27     1%  < File system cache
Free (cachelist)         3241      25     1%  < Free memory with vnode info intact
Free (freelist)        195422    1526    76%  < Free memory

Total                  258627    2020
Physical               254812    1990

If the system is running ZFS, the ZFS cache will also be listed. ZFS uses kernel memory to cache file system blocks. You can monitor ZFS cache memory usage using:

# kstat -n arcstats
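The full arcstats listing is long; to track just the cache size over time, the size statistic can be selected directly (a sketch; size is reported in bytes, and the trailing 10 is a sampling interval in seconds):

# kstat -p zfs:0:arcstats:size 10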

The system_pages kstat reports kernel memory usage in pages (8 KB on SPARC, 4 KB on x86). It also reports memory in use by the kernel and pages locked by applications.

# kstat -n system_pages
module: unix    instance: 0
name:   system_pages    class: pages

freemem         8337355  < available free memory
..
lotsfree         257271  < paging starts when freemem drops below lotsfree
minfree           64317  < swapping starts if freemem drops below minfree
pageslocked     4424860  < pages locked, excluding pp_kernel (kernel pages)
pagestotal     16465378  < total pages configured
physmem        16487075  < total pages usable by Solaris
pp_kernel       4740398  < memory allocated in the kernel
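Since these counters are in pages, converting to megabytes requires the page size; a minimal ksh sketch combining pagesize(1) with the kstat selector shown earlier:

PAGESZ=`pagesize`
FREEMEM=`kstat -p unix:0:system_pages:freemem | awk '{print $2}'`
echo "free memory: $((FREEMEM * PAGESZ / 1048576)) MB"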

The ::kmastat dcmd reports memory usage in the kernel slab caches. These caches are used by the various kernel subsystems and drivers to allocate memory.

# echo "::kmastat" | mdb -k
cache                      buf     buf     buf    memory     alloc  alloc
name                      size  in use   total    in use   succeed   fail
----------------------  ------  ------  ------  --------  --------  -----
..
kmem_slab_cache             56    2455    2465    139264      2571      0
kmem_bufctl_cache           24    5463    5763    139264      6400      0
kmem_bufctl_audit_cache    128       0       0         0         0      0
kmem_va_8192              8192      74      96    786432        74      0
kmem_va_16384            16384       2      16    262144         2      0
kmem_va_24576            24576       5      10    262144         5      0
kmem_va_32768            32768       1       8    262144         1      0
kmem_va_40960            40960       0       0         0         0      0
kmem_va_49152            49152       0       0         0         0      0
kmem_va_57344            57344       0       0         0         0      0
kmem_va_65536            65536       0       0         0         0      0
kmem_alloc_8                 8   97210   98649    794624   3884007      0
kmem_alloc_16               16   29932   30988    499712   9786629      0
kmem_alloc_24               24   43651   44409   1073152  69596060      0
kmem_alloc_32               32   11512   12954    417792  71088529      0

To isolate issues with high kernel memory allocation and leaks, turn on kernel memory auditing by setting the tunable below in the /etc/system file and rebooting:

set kmem_flags=0x1
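After the reboot, one way to confirm the flag took effect is to read the kmem_flags variable back with mdb (a sketch; the printed value should be 1, i.e. audit mode):

# echo "kmem_flags/X" | mdb -k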

Continue to run ::kmastat on a regular basis and monitor the growth of the kernel caches. If kernel memory allocation reaches an alarming level, force a system panic and send the kernel core dump, located in the /var/crash directory, to Oracle Support for analysis.

To monitor application memory usage consider using:

$ prstat -s rss -can 100
$ ps -eo 'addr zone user s pri pid ppid pcpu pmem vsz rss stime time nlwp psr args'

To see which memory segments in a process have high memory allocation:

$ pmap -xs <pid>

Continued growth in application memory usage is a sign of a memory leak. You may request that the application vendor provide debugging tools, or consider linking against libumem(3LIB), which offers a rich set of debugging facilities; see the article on how to use it.
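As an illustration of the libumem route, an application can be run with libumem preloaded and its debugging features switched on; a sketch, where ./myapp is a hypothetical binary name:

$ LD_PRELOAD=libumem.so.1 UMEM_DEBUG=default UMEM_LOGGING=transaction ./myapp &
$ echo "::findleaks" | mdb -p `pgrep myapp`

The second command asks mdb to scan the live process for leaked buffers. Alternatively, you can monitor application malloc() calls with DTrace scripts.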

Process allocation (via malloc()) requested size distribution plot:

dtrace -n 'pid$target::malloc:entry { @ = quantize(arg0); }' -p PID

Process allocation (via malloc()) by user stack trace and total requested size:

dtrace -n 'pid$target::malloc:entry { @[ustack()] = sum(arg0); }' -p PID

Virtual Memory Shortages

Processes use virtual memory. A process's virtual address space is made up of a number of memory segments: text, data, stack, heap, and COW segments. When a process accesses a virtual address, a page fault brings the data into physical memory, and the faulted virtual address is then mapped to physical memory. All pages reside in a memory segment and have a backing store to which pages in the segment can be migrated during memory shortages. Text and data segments are backed by the executable file on the file system. Stack, heap, COW (copy-on-write), and shared memory pages are anonymous (anon) pages, and they are backed by virtual swap.

An ISM segment does not require swap reservation, since all of its pages are locked in memory by the kernel and are not candidates for swapping.

DISM does require swap reservation, since its memory can be locked and unlocked by the process.

When a process uses DISM, it selectively grows the SGA by locking address ranges. Failing to lock a DISM region while continuing to use it as SGA for DB block caching may result in slow Oracle DB performance, because accessing those pages triggers page faults that slow Oracle down. See Doc 1018855.1.

When a process starts touching pages, anon structures are allocated, but no physical disk swap is allocated. In Solaris, swap allocation only happens when memory is short and pages need to be migrated to the swap device to keep up with the workload's memory demand. That is why "swap -l", which reports physical disk swap allocation, shows the same value in the "blocks" and "free" columns under normal conditions.

Solaris can run without physical disk swap thanks to the swapfs abstraction, which acts as if real swap space were backing each page. Solaris works with virtual swap, which is composed of physical memory plus physical disk swap. When no physical disk swap is configured, swap reservations are made against physical memory. Reserving swap against memory has a drawback: the system cannot satisfy a malloc() larger than the physical memory configured. The advantage of running without physical disk swap is that a malicious program cannot perform huge mallocs and thus cannot bring the system to a crawl through memory shortages.

Virtual swap = Physical memory + Physical Disk swap
Available virtual swap is reported by:

  • vmstat: swap
  • swap -s

Disk-backed swap is reported by:

  • swap -l
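As a sketch of how the available figure might be polled from ksh (the 100 MB threshold is an arbitrary example, and the parsing assumes the usual "NNNNNNk available" tail of the swap -s output):

#!/bin/ksh
# warn when available virtual swap drops below a threshold
THRESHOLD_KB=102400
AVAIL_KB=`swap -s | awk '{print $(NF-1)}' | sed 's/k$//'`
if [ $AVAIL_KB -lt $THRESHOLD_KB ]; then
    echo "WARNING: only ${AVAIL_KB}KB of virtual swap available"
fi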


Per-process virtual swap reservation can be displayed with:

  • pmap -S <pid>

prstat can report the virtual memory usage (SIZE) of a process; note, however, that SIZE covers all memory segments, not just anon memory:

  • prstat -s size -can 100 15
  • prstat -s size -can -p <pidlist> 100 15

You can dump a process's address space, showing all segments, using:

  • pmap -xs <pid>

When a process calls malloc()/sbrk(), only virtual swap is reserved. The reservation is made against physical disk swap first; if that is exhausted or not configured, the reservation is made against physical memory. If both are exhausted, malloc() fails. To make sure malloc() will not fail for lack of virtual swap, configure a large amount of physical disk swap, in the form of a disk slice or a file. You can monitor swap reservations via "swap -s" and "vmstat:swap", as described above.
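For example, a swap file can be added on the fly; a sketch, where the /export/swapfile path and the 4 GB size are arbitrary:

# mkfile 4096m /export/swapfile
# swap -a /export/swapfile

To make it permanent across reboots, the matching /etc/vfstab entry would be "/export/swapfile  -  -  swap  -  no  -".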

On a system with plenty of memory, "swap -l" reports the same value in the "blocks" and "free" columns.

"swap -l" reporting a large value under "free" does not mean that plenty of virtual swap is available and that malloc() will not fail: "swap -l" says nothing about virtual swap usage, only about physical disk swap allocation. It is "swap -s" and "vmstat:swap" that report how much virtual swap is available for reservation.

Script to monitor memory usage:

#!/bin/ksh

# Script monitors kernel and application memory usage

PATH=/bin:/usr/bin:/usr/sbin; export PATH
trap "killall" HUP INT QUIT TERM USR1 USR2

# kill the background collectors, then exit
killall()
{
    for PID in $PIDLIST
    do
        kill -9 $PID 2>/dev/null
    done
    exit
}

# kill the background collectors so the loop can relaunch them
restart()
{
    for PID in $PIDLIST
    do
        kill -9 $PID 2>/dev/null
    done
}

DIR=DATA.`date +%Y%m%d-%T`
TS=`date +%Y%m%d-%T`

mkdir $DIR
cd $DIR

while true
do
    TS=`date +%Y%m%d-%T`
    echo $TS >> mem.out
    echo "output of ::memstat" >> mem.out
    echo "::memstat" | mdb -k >> mem.out
    echo "output of kstat -n arcstats (ZFS ARC memory usage)" >> mem.out
    kstat -n arcstats >> mem.out
    echo "output of ::kmastat" >> mem.out
    echo "::kmastat" | mdb -k >> mem.out
    echo "output of swap -s and swap -l" >> mem.out
    echo "swap -s" >> mem.out
    swap -s >> mem.out
    echo "swap -l" >> mem.out
    swap -l >> mem.out
    echo "output of ps" >> mem.out
    /usr/bin/ps -eo 'addr zone user s pri pid ppid pcpu pmem vsz rss stime time nlwp psr args' >> mem.out
    #
    # start vmstat, mpstat and prstat in the background
    #
    PIDLIST=""
    echo $TS >> vmstat.out
    vmstat 5 >> vmstat.out &
    PIDLIST="$PIDLIST $!"
    echo $TS >> mpstat.out
    mpstat 5 >> mpstat.out &
    PIDLIST="$PIDLIST $!"
    echo $TS >> prstat.out
    prstat -s rss -can 100 >> prstat.out &
    PIDLIST="$PIDLIST $!"

    sleep 600  # every 10 minutes

    restart
done
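A possible way to launch it, assuming the script was saved as memmon.sh (a hypothetical name):

# nohup ./memmon.sh &

The collected files (mem.out, vmstat.out, mpstat.out, prstat.out) accumulate under the timestamped DATA.* directory the script creates.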

"Let the Bullets Fly" (《让子弹飞》): three old men put on quite a show

Tonight I went with my girlfriend to see Bullets, arguably the most worthwhile domestic blockbuster of the past few years. Three old men carry the whole show, and all three roles stay fairly true to the actors themselves. Worth singling out is Ge You's office-buying county magistrate, who greases the wheels of the entire film nicely. If you haven't made it to the cinema yet, go take a look and have a laugh.
