[Python] SyntaxError: Missing parentheses in call to ‘print’

The Python version currently installed on Ubuntu is still 2.7, while the version installed earlier on Windows is 3.0.
As a result, taking code written on Ubuntu over to Windows and running it there produces the following error…
SyntaxError: Missing parentheses in call to ‘print’

Referring to the Stack Overflow question What does “SyntaxError: Missing parentheses in call to ‘print’” mean in Python?,
it turns out the print syntax just needs a small fix.

Originally the code used

print "字串"

which has to be changed to

print("字串")

The only difference is wrapping the argument in parentheses~
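
If the same script has to run on both the Python 2.7 box and the Python 3 box, one small compatibility sketch (not from the original post) is to import the print function from __future__, so the parenthesized form works in both versions:

# Works on Python 2.6+ and Python 3: print becomes a function in both.
from __future__ import print_function

print("hello")             # fine under 2.7 and 3.x
print("a", "b", sep=", ")  # keyword arguments become available under 2.7 too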

Posted in Python

vCenter: Could not allocate space for object …

Recently vCenter has been showing a “Host connection and power state” warning for one of the ESXi hosts…
Following the instructions in Host connection and Power state in ESXi 5.0,
I opened ports 902, 5989, 80, 443, 623, 8000, 8001, 6500, 514, and 8080 in the firewall settings on the vCenter server.

Then, looking through the system logs on the vCenter machine, I found a couple of errors…
1. Could not allocate space for object ‘dbo.VPX_EVENT_ARG’.’PK_VPX_EVENT_ARG’ in database ‘VIM_VCDB’ because the ‘PRIMARY’ filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
2. CREATE DATABASE or ALTER DATABASE failed because the resulting cumulative database size would exceed your licensed limit of 4096 MB per database.

Since the version installed back then was MSSQL 2005 Express… the only way to fix error 2 is to upgrade, and I'm holding off on that risky step for now.
So I started with error 1.
First download and install Microsoft SQL Server Management Studio Express 2005,
then follow the steps in Purging old data from the database used by VMware vCenter Server 4.x and 5.x (1025914):
1. Open VIM_VCDB -> dbo.VPX_PARAMETER and change two pairs of values:
(1) Set event.maxAge to 30 (that's what the article says; I kept 180) and set event.maxAgeEnabled to true.
(2) Set task.maxAge to 30 (same note as above) and set task.maxAgeEnabled to true.
2. Run the stored procedure VIM_VCDB -> Programmability -> Stored Procedures -> dbo.cleanup_events_tasks_proc.

This purges the events and tasks from long ago and frees up a bit of space~!
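
If you would rather script those same two steps instead of clicking through Management Studio, a rough sketch with pyodbc could look like the following. The connection string, the SQLEXPRESS instance name, and the NAME/VALUE column names of VPX_PARAMETER are my assumptions here; only the table, the parameter names, and the stored procedure come from the KB article.

# -*- coding: utf-8 -*-
# Hypothetical sketch: shorten vCenter event/task retention, then purge old rows.
import pyodbc

# Connection details are assumptions; adjust server/instance/credentials to match your setup.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=localhost\\SQLEXPRESS;"
    "DATABASE=VIM_VCDB;Trusted_Connection=yes")
cur = conn.cursor()

# Step 1: cap the retention window and enable age-based cleanup.
for name, value in [("event.maxAge", "30"), ("event.maxAgeEnabled", "true"),
                    ("task.maxAge", "30"), ("task.maxAgeEnabled", "true")]:
    cur.execute("UPDATE dbo.VPX_PARAMETER SET VALUE = ? WHERE NAME = ?", value, name)
conn.commit()

# Step 2: run the cleanup stored procedure that deletes the old events/tasks.
cur.execute("EXEC dbo.cleanup_events_tasks_proc")
conn.commit()
conn.close()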

Posted in MSSQL, VMware

Gitbook

I've now gone through setting up a self-hosted Gitbook more or less from start to finish.
The details are in Cowman’s Gitbook Bookstore.

My impressions so far:
1. The web interface is simple and works especially well on mobile devices.
2. There is already support for plenty of plugins, which is enough for basic use.
3. For the public-facing pages I'd mainly serve the static HTML produced by a build, and keep the live serve site for development and testing (see the commands below).
4. Generated e-books can still end up with missing images because of plugins, so inserting images directly works better.
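
For reference, the two modes in point 3 map onto the gitbook-cli commands below (the output folder and port are the defaults as far as I remember):

gitbook build    # renders the book to static HTML under ./_book
gitbook serve    # live-reloading development server, by default at http://localhost:4000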

Later, when I have the chance and the time, I'll start slowly converting my old notes into Gitbooks, one book at a time~~

Posted in Gitbook

Docker

A few days ago my senior Louk shared an article introducing Docker on Facebook (link).
It was a pretty good read, and then yesterday the Old Monster came to talk Docker with me XD
During the discussion he offered a nice analogy~ Docker is like a VM without an OS ~ which I think is quite apt XD
It's not that no OS is involved at all; rather, containers build on a base OS image, which is perhaps a bit like the linked-mode VMs VMware used to mention.
Cutting down the duplication of the OS not only saves space but also makes rapid deployment more convenient.

Taking a closer look at the official Docker site,
typical usage seems to involve a constant cycle of pulling and building, pulling and building,
the goal being to end up with a Docker image of your own.

Once a Docker image is built, it can be pushed to Docker.io for storage.
Docker.io is free to use as long as you register an account,
but any image you push there is public~ anyone who knows your account and the image name can pull it back down….
For this part there is also a self-hosted docker registry you can run (reference link).

It looks like there is still plenty in Docker left to play with~

Posted in Docker

Workaround for Gitbook not creating files/directories more than two levels deep

Because the Gitbook authors felt a book's table of contents should not be overly complex, the default originally allowed only two levels of directories.
After a pile of related discussion threads wrapped up last year, the table of contents can now go beyond two levels,
but gitbook (the GitHub source) still cannot create the corresponding directories and files.

After crawling through a lot of the code (/usr/lib/node_modules/gitbook), I decided to just write my own parsing script to handle it.

The script reads SUMMARY.md, automatically checks whether each referenced directory and file exists,
and creates anything that is missing.
By default it only handles directory structures up to three levels deep.

#!/usr/bin/python
# -*- coding: utf-8 -*-

import re
import os

# Walk SUMMARY.md, pull the link target out of every "[title](path)" entry,
# and create any missing folders/files for paths two or three levels deep.
for line in open("SUMMARY.md"):
    try:
        src1 = re.search(r'\((.+?)\)', line)  # text between the parentheses
        src = src1.group(1)
    except AttributeError:
        # No "(...)" link target on this line; just echo it.
        print line
    else:
        cnt = src.count('/')
        if cnt == 3:
            m = re.search('(.+?)/(.+?)/(.+?)/(.*)', src)
            path = m.group(1) + "/" + m.group(2) + "/" + m.group(3)
            filepath = m.group(1) + "/" + m.group(2) + "/" + m.group(3) + "/" + m.group(4)
            if not os.path.isdir(path):
                print "create folder: " + path
                os.makedirs(path)  # creates intermediate folders as needed
            if not os.path.isfile(filepath):
                print "create file: " + filepath
                open(filepath, "a").close()
            print m.group(1) + ".." + m.group(2) + ".." + m.group(3) + ".." + m.group(4)

        elif cnt == 2:
            m = re.search('(.+?)/(.+?)/(.*)', src)
            path = m.group(1) + "/" + m.group(2)
            filepath = m.group(1) + "/" + m.group(2) + "/" + m.group(3)
            if not os.path.isdir(path):
                print "create folder: " + path
                os.makedirs(path)
            if not os.path.isfile(filepath):
                print "create file: " + filepath
                open(filepath, "a").close()
            print m.group(1) + ".." + m.group(2) + ".." + m.group(3)
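
For reference, the SUMMARY.md entries the script is meant to pick up look roughly like the lines below (these paths are made-up examples, not from a real book); entries whose path contains two or three slashes get their directories/files created, everything else is just echoed back.

* [Chapter one](part1/chapter1/README.md)
* [A deeper section](part1/chapter1/section1/notes.md)

Run the script from the book's root directory, i.e. the directory that contains SUMMARY.md.
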
Posted in Gitbook, Python

Installing the SFTP package in Sublime Text

  1. Ctrl + Shift + P => type install package and select it
  2. Type SFTP and install the package
  3. Open the folder you want to sync
  4. View => Side Bar => Show Side Bar
  5. Right-click the folder in the side bar, SFTP/FTP => Map to Remote
  6. Set “host”, “user”, “port”, and “remote_path”, and set “save_before_upload” to true (see the example config below)
  7. Right-click the folder in the side bar, SFTP/FTP => Download Folder
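
Map to Remote drops an sftp-config.json into the folder; a minimal example for step 6 might look like this (host, user, port, and remote_path are placeholders to replace with your own values):

{
    "type": "sftp",
    "save_before_upload": true,

    "host": "example.com",
    "user": "username",
    "port": "22",
    "remote_path": "/home/username/project/"
}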

Time to get some work done~

— 2015/01/14 update —
It later popped up saying this needs a license….. back to vim then Orz

Posted in Sublime Text 2

Illustrating network load balancing

Lately I've been helping a superior touch up some very old-school slides, with instructions to make them as colorful and lively as possible….
So they turned into the pictures below…
Consider it a bit of entertainment on the side Orz…

With traditional dedicated-line planning, it's easy for one line to be congested while another sits idle~
(image: traditional dedicated-line planning)

After adding a load-balancing device~ all of the bandwidth can be shared and allocated, which improves things~
(image: with a load-balancing device added)

When a traditional dedicated line goes down, it's a tragedy….
(image: dedicated line down)

Again, this improves once a load-balancing device is in place~
(image: with load balancing added)

This is the kuso (parody) version of the traditional dedicated-line plan~ you can spot people at the back waiting with hands on hips, impatient~ and others hanging their heads, dejected~
(image: dedicated-line planning, kuso version)

This one shows that malware is queuing up as well (the figure with horns on its head)
(image: malware in the queue)

And this one mainly shows that once a firewall is added~ the malware's attacks can be blocked
(image: malware blocked)

Posted in Network

Windows 7: cannot use a local IIS web site for development in Visual Studio 2005

When you run into the error message
“You must be a member of the Administrators group on the local computer to access the IIS metabase. Therefore, you cannot create or open a local IIS web site. If you have read, write, and modify permissions for the folder containing the files, you can create a file system web site that points to that folder so you can edit the files.”
and can only develop using the file-system web site option:

  1. Set Visual Studio to run as administrator
  2. Control Panel -> Programs -> Turn Windows features on or off -> Internet Information Services -> Web Management Tools -> IIS 6 Management Compatibility, and install IIS Metabase and IIS 6 configuration compatibility

Reference: Error message when you try to create a Visual Studio 2005 ASP.NET project on a Windows Vista-based computer

Posted in Visual Studio 2005

Enabling color output in the Ubuntu console

Edit ~/.bashrc and change the line

#force_color_prompt=yes

to

force_color_prompt=yes
Posted in Ubuntu

A quick look at the definitions behind Elasticsearch

[Excerpted from Elasticsearch Server, 2nd Edition, page 12]

The basics of Elasticsearch
Elasticsearch is an open source search server project started by Shay Banon and published in February 2010. During this time, the project has grown into a major player in the field of search and data analysis solutions and is widely used in many more or lesser-known search applications. In addition, due to its distributed nature and real-time capabilities, many people use it as a document store.

Index
Index is the logical place where Elasticsearch stores logical data, so that it can be divided into smaller pieces. If you come from the relational database world, you can think of an index like a table. However, the index structure is prepared for fast and efficient full-text searching, and in particular, does not store original values. If you know MongoDB, you can think of the Elasticsearch index as a collection in MongoDB. If you are familiar with CouchDB, you can think about an index as you would about the CouchDB database. Elasticsearch can hold many indices located on one machine or spread over many servers. Every index is built of one or more shards, and each shard can have many replicas.

Document
The main entity stored in Elasticsearch is a document. Using the analogy to relational databases, a document is a row of data in a database table. When you compare an Elasticsearch document to a MongoDB document, you will see that both can have different structures, but the document in Elasticsearch needs to have the same type for all the common fields. This means that all the documents with a field called title need to have the same data type for it, for example, string.
Documents consist of fields, and each field may occur several times in a single document (such a field is called multivalued). Each field has a type (text, number, date, and so on). The field types can also be complex: a field can contain other subdocuments or arrays. The field type is important for Elasticsearch because it gives information about how various operations such as analysis or sorting should be performed. Fortunately, this can be determined automatically (however, we still suggest using mappings). Unlike the relational databases, documents don’t need to have a fixed structure—every document may have a different set of fields, and in addition to this, fields don’t have to be known during application development. Of course, one can force a document structure with the use of schema. From the client’s point of view, a document is a JSON object (see more about the JSON format at http://en.wikipedia.org/wiki/JSON). Each document is stored in one index and has its own unique identifier (which can be generated automatically by Elasticsearch) and document type. A document needs to have a unique identifier in relation to the document type. This means that in a single index, two documents can have the same unique identifier if they are not of the same type.

Document type
In Elasticsearch, one index can store many objects with different purposes. For example, a blog application can store articles and comments. The document type lets us easily differentiate between the objects in a single index. Every document can have a different structure, but in real-world deployments, dividing documents into types significantly helps in data manipulation. Of course, one needs to keep the limitations in mind; that is, different document types can’t set different types for the same property. For example, a field called title must have the same type across all document types in the same index.
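
As a quick illustration of documents and document types, here is a small sketch with the official elasticsearch Python client against the 1.x-era API the book describes; the index name, field names, and ids are made up for the example.

from elasticsearch import Elasticsearch

es = Elasticsearch()  # connects to localhost:9200 by default

# A document is just a JSON object, stored under index / type / id.
article = {"title": "Hello Elasticsearch", "published": "2015-01-20"}
es.index(index="blog", doc_type="article", id=1, body=article)

# The same id can be reused by a different type in the same index,
# because identifiers only need to be unique per document type.
es.index(index="blog", doc_type="comment", id=1, body={"title": "Nice post"})

print(es.get(index="blog", doc_type="article", id=1)["_source"])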

Mapping
In the section about the basics of full-text searching (the Full-text searching section), we wrote about the process of analysis—the preparation of input text for indexing and searching. Every field of the document must be properly analyzed depending on its type. For example, a different analysis chain is required for the numeric fields (numbers shouldn’t be sorted alphabetically) and for the text fetched from web pages (for example, the first step would require you to omit the HTML tags as it is useless information—noise). Elasticsearch stores information about the fields in the mapping. Every document type has its own mapping, even if we don’t explicitly define it.
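
Continuing the same sketch, a mapping can be supplied explicitly when a fresh index is created (the field types below reflect the 1.x-era API; "string" was later split into text/keyword in newer releases):

es.indices.create(index="books", body={
    "mappings": {
        "chapter": {                       # document type
            "properties": {
                "title":     {"type": "string"},
                "published": {"type": "date"}
            }
        }
    }
})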

Key concepts of Elasticsearch
Now, we already know that Elasticsearch stores data in one or more indices. Every index can contain documents of various types. We also know that each document has many fields and how Elasticsearch treats these fields is defined by mappings. But there is more. From the beginning, Elasticsearch was created as a distributed solution that can handle billions of documents and hundreds of search requests per second. This is due to several important concepts that we are going to describe in more detail now.

Node and cluster
Elasticsearch can work as a standalone, single-search server. Nevertheless, to be able to process large sets of data and to achieve fault tolerance and high availability, Elasticsearch can be run on many cooperating servers. Collectively, these servers are called a cluster, and each server forming it is called a node.

Shard
When we have a large number of documents, we may come to a point where a single node may not be enough—for example, because of RAM limitations, hard disk capacity, insufficient processing power, and inability to respond to client requests fast enough. In such a case, data can be divided into smaller parts called shards (where each shard is a separate Apache Lucene index). Each shard can be placed on a different server, and thus, your data can be spread among the cluster nodes. When you query an index that is built from multiple shards, Elasticsearch sends the query to each relevant shard and merges the result in such a way that your application doesn’t know about the shards. In addition to this, having multiple shards can speed up the indexing.

Replica
In order to increase query throughput or achieve high availability, shard replicas can be used. A replica is just an exact copy of the shard, and each shard can have zero or more replicas. In other words, Elasticsearch can have many identical shards and one of them is automatically chosen as a place where the operations that change the index are directed. This special shard is called a primary shard, and the others are called replica shards. When the primary shard is lost (for example, a server holding the shard data is unavailable), the cluster will promote the replica to be the new primary shard.
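
The number of shards and replicas is likewise fixed in the index settings at creation time (again only a sketch with the Python client; five shards and one replica are just illustrative values):

es.indices.create(index="logs", body={
    "settings": {
        "number_of_shards": 5,     # primary shards, cannot be changed later
        "number_of_replicas": 1    # copies of each primary, can be changed later
    }
})
print(es.cluster.health(index="logs")["status"])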

Gateway
Elasticsearch handles many nodes. The cluster state is held by the gateway. By default, every node has this information stored locally, which is synchronized among nodes. We will discuss the gateway module in The gateway and recovery modules section of Chapter 7, Elasticsearch Cluster in Detail.

Posted in Elasticsearch