Category Archives: Database (資料庫)
Delete snapshots older than 7 days
To avoid running out of disk space in our test environment, we set up a shell script to run regularly and clean up snapshots we no longer need.
#!/bin/bash
week=`date --date='7 days ago' +'%Y%m%d'`
echo "list_snapshots" | /home/webuser/hbase-1.2.9/bin/hbase shell | grep "pattern " | \
while read CMD; do
  filename=($CMD)
  # echo $filename
  date=`echo $filename | awk -F "_" '{print $2}'`
  # echo "${filename#*_}"
  # echo $date
  # echo ${date:0:8}
  if [ "${date:0:8}" -lt $week ]
  then
    echo "delete_snapshot '$filename'" | /home/webuser/hbase-1.2.9/bin/hbase shell
  fi
done
Posted in HBase, Shell Script
HBase Client 2.5.5 in JDK 11
The project's vulnerability-scan findings required upgrading the JDK from 8 to 11. The HBase Client 1.2 we had been using is incompatible with JDK 11, so we had to … Continue reading
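The excerpt cuts off here, but for context, a minimal sketch of the Connection/Table API the 2.x client standardizes on (replacing the old HTable-centric 1.x style); the quorum host, table name, and row key below are made up for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBase2ClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "zk-host"); // hypothetical quorum host
        // The 2.x client works through Connection/Table instead of constructing HTable directly
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("SomeTable"))) {
            Result result = table.get(new Get(Bytes.toBytes("rowkey")));
            System.out.println(result);
        }
    }
}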
RocksDB Tool
Install RocksDB first; the Homebrew package ships the ldb tool as rocksdb_ldb.
brew install rocksdb
Add an alias to ~/.zshrc:
alias ldb='rocksdb_ldb --db=. '
List all column families
# ldb list_column_families
Column families in .: {default, S1, User, C1, C2, U1, C3}
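If you would rather do this from code, here is a minimal sketch using the RocksJava binding (assuming the org.rocksdb:rocksdbjni dependency is on the classpath and the DB lives in the current directory, matching the alias above):

import java.nio.charset.StandardCharsets;
import java.util.List;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class ListColumnFamiliesSketch {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options()) {
            // Programmatic equivalent of "ldb list_column_families"
            List<byte[]> families = RocksDB.listColumnFamilies(options, ".");
            for (byte[] name : families) {
                System.out.println(new String(name, StandardCharsets.UTF_8));
            }
        }
    }
}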
Scan command:
# ldb --column_family=User scan
User_U_name : cowman
User_U_status : 0
User_U_type : 0
User_U_updatedTimestamp :
User_U_userId : U
Show the results as hex values:
ldb --column_family=User scan --value_hex
User_U_name : 0x636F776D616E
User_U_status : 0x30
User_U_type : 0x30
User_U_updatedTimestamp : 0x0000000000000000
User_U_userId : 0x55
Command reference (some options differ from the ldb builds on our Linux servers):
ldb - RocksDB Tool

commands MUST specify --db=<full_path_to_db_directory> when necessary
commands can optionally specify
  --env_uri=<uri_of_environment> or --fs_uri=<uri_of_filesystem> if necessary
  --secondary_path=<secondary_path> to open DB as secondary instance. Operations not supported in secondary instance will fail.

The following optional parameters control if keys/values are input/output as hex or as plain strings:
  --key_hex : Keys are input/output as hex
  --value_hex : Values are input/output as hex
  --hex : Both keys and values are input/output as hex

The following optional parameters control the database internals:
  --column_family=<string> : name of the column family to operate on. default: default column family
  --ttl with 'put','get','scan','dump','query','batchput' : DB supports ttl and value is internally timestamp-suffixed
  --try_load_options : Try to load option file from DB. Default to true if db is specified and not creating a new DB and not open as TTL DB. Can be set to false explicitly.
  --disable_consistency_checks : Set options.force_consistency_checks = false.
  --ignore_unknown_options : Ignore unknown options when loading option file.
  --bloom_bits=<int,e.g.:14>
  --fix_prefix_len=<int,e.g.:14>
  --compression_type=<no|snappy|zlib|bzip2|lz4|lz4hc|xpress|zstd>
  --compression_max_dict_bytes=<int,e.g.:16384>
  --block_size=<block_size_in_bytes>
  --auto_compaction=<true|false>
  --db_write_buffer_size=<int,e.g.:16777216>
  --write_buffer_size=<int,e.g.:4194304>
  --file_size=<int,e.g.:2097152>
  --enable_blob_files : Enable key-value separation using BlobDB
  --min_blob_size=<int,e.g.:2097152>
  --blob_file_size=<int,e.g.:2097152>
  --blob_compression_type=<no|snappy|zlib|bzip2|lz4|lz4hc|xpress|zstd>
  --enable_blob_garbage_collection : Enable blob garbage collection
  --blob_garbage_collection_age_cutoff=<double,e.g.:0.25>
  --blob_garbage_collection_force_threshold=<double,e.g.:0.25>
  --blob_compaction_readahead_size=<int,e.g.:2097152>

Data Access Commands:
  put <key> <value> [--create_if_missing] [--ttl]
  get <key> [--ttl]
  batchput <key> <value> [<key> <value>] [..] [--create_if_missing] [--ttl]
  scan [--from] [--to] [--ttl] [--timestamp] [--max_keys=<N>q] [--start_time=<N>:- is inclusive] [--end_time=<N>:- is exclusive] [--no_value]
  delete <key>
  deleterange <begin key> <end key>
  query [--ttl]
    Starts a REPL shell. Type help for list of available commands.
  approxsize [--from] [--to]
  checkconsistency
  list_file_range_deletes [--max_keys=<N>] : print tombstones in SST files.

Admin Commands:
  dump_wal --walfile=<write_ahead_log_file_path> [--header] [--print_value] [--write_committed=true|false]
  compact [--from] [--to]
  reduce_levels --new_levels=<New number of levels> [--print_old_levels]
  change_compaction_style --old_compaction_style=<Old compaction style: 0 for level compaction, 1 for universal compaction> --new_compaction_style=<New compaction style: 0 for level compaction, 1 for universal compaction>
  dump [--from] [--to] [--ttl] [--max_keys=<N>] [--timestamp] [--count_only] [--count_delim=<char>] [--stats] [--bucket=<N>] [--start_time=<N>:- is inclusive] [--end_time=<N>:- is exclusive] [--path=<path_to_a_file>] [--decode_blob_index] [--dump_uncompressed_blobs]
  load [--create_if_missing] [--disable_wal] [--bulk_load] [--compact]
  manifest_dump [--verbose] [--json] [--path=<path_to_manifest_file>]
  update_manifest [--update_temperatures]
    MUST NOT be used on a live DB.
  file_checksum_dump [--path=<path_to_manifest_file>]
  get_property <property_name>
  list_column_families
  create_column_family --db=<db_path> <new_column_family_name>
  drop_column_family --db=<db_path> <column_family_name_to_drop>
  dump_live_files [--decode_blob_index] [--dump_uncompressed_blobs]
  idump [--from] [--to] [--input_key_hex] [--max_keys=<N>] [--count_only] [--count_delim=<char>] [--stats] [--decode_blob_index]
  list_live_files_metadata [--sort_by_filename]
  repair [--verbose]
  backup [--backup_env_uri | --backup_fs_uri] [--backup_dir] [--num_threads] [--stderr_log_level=<int (InfoLogLevel)>]
  restore [--backup_env_uri | --backup_fs_uri] [--backup_dir] [--num_threads] [--stderr_log_level=<int (InfoLogLevel)>]
  checkpoint [--checkpoint_dir]
  write_extern_sst <output_sst_path>
  ingest_extern_sst <input_sst_path> [--move_files] [--snapshot_consistency] [--allow_global_seqno] [--allow_blocking_flush] [--ingest_behind] [--write_global_seqno]
  unsafe_remove_sst_file <SST file number>
    MUST NOT be used on a live DB.
Posted in RocksDB
[HBase] security issue
Error message:

com.a.b.c.exception.BaseDAOException: org.apache.hadoop.hbase.security.AccessDeniedException: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions for user 'Cowman' (table=ABC, action=READ)

Fix: add "-DHADOOP_USER_NAME=webuser" to the JVM options.
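Since the flag works through a Java system property, setting the same property programmatically should be equivalent; a sketch (the property must be set before the first HBase call triggers the Hadoop login):

public class HadoopUserNameFix {
    public static void main(String[] args) {
        // Equivalent of passing -DHADOOP_USER_NAME=webuser to the JVM;
        // must run before any HBase/Hadoop code performs the first login.
        System.setProperty("HADOOP_USER_NAME", "webuser");
        // ... create the HBase Connection and read from the table afterwards ...
    }
}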
Posted in HBase, Java
[HBase] convert long value to bytes
Java code:
import org.apache.hadoop.hbase.util.Bytes;

public class LongBytesDemo {
    public static void main(String[] args) {
        // long -> printable binary-string form
        String stringValue = Bytes.toStringBinary(Bytes.toBytes(1532080782183L));
        System.out.println(stringValue);
        // and back to the original long
        Long longValue = Bytes.toLong(Bytes.toBytesBinary(stringValue));
        System.out.println(longValue);
    }
}
HBase shell command:
hbase(main):056:0> Bytes.toStringBinary(Bytes.to_bytes(1532080782183))
=> "\\x00\\x00\\x01d\\xB7!{g"
hbase(main):057:0> Bytes.toLong("\x00\x00\x01d\xB7!{g".to_java_bytes)
=> 1532080782183
Posted in HBase, Java
[HBase] Filters not working for negative integers
[stackoverflow] HBase: Filters not working for negative integers. In short: since HBase has only BinaryComparator and no 'typed' comparators, it fails to filter on negative integers, because it stores the two's complement of the negative number. Further, the binary … Continue reading
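A quick illustration of the problem (my own sketch, not from the linked answer): HBase stores an int as its big-endian two's-complement bytes, and BinaryComparator compares those bytes as unsigned, so -1 sorts above 1.

import org.apache.hadoop.hbase.util.Bytes;

public class NegativeIntCompareSketch {
    public static void main(String[] args) {
        // -1 is stored as 0xFFFFFFFF, 1 as 0x00000001
        System.out.println(Bytes.toStringBinary(Bytes.toBytes(-1))); // \xFF\xFF\xFF\xFF
        System.out.println(Bytes.toStringBinary(Bytes.toBytes(1)));  // \x00\x00\x00\x01
        // Unsigned lexicographic comparison, as BinaryComparator does: positive => -1 > 1
        System.out.println(Bytes.compareTo(Bytes.toBytes(-1), Bytes.toBytes(1)));
    }
}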
Posted in HBase
[HBase] get the scan result without specific cq
scan 'TableName', FILTER => "QualifierFilter(!=, 'binary:QUALIFY2')"
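The equivalent filter from the Java client should look roughly like this (a sketch against the 1.x client API used elsewhere on this blog; the 2.x API takes a CompareOperator instead of CompareOp):

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.QualifierFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class QualifierFilterSketch {
    // Build a Scan that keeps every cell whose qualifier is NOT the given one
    public static Scan scanWithoutQualifier(String qualifier) {
        Scan scan = new Scan();
        scan.setFilter(new QualifierFilter(
                CompareFilter.CompareOp.NOT_EQUAL,
                new BinaryComparator(Bytes.toBytes(qualifier))));
        return scan;
    }
}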
Posted in HBase
Install HBase (CDH 5.9) in Mac OS X
Download the HBase package tar.gz file from https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_vd_cdh_package_tarball_59.html
Untar the tar.gz file
Edit conf/hbase-env.sh:

export JAVA_HOME={{JAVA_HOME Directory path}}

Edit conf/hbase-site.xml:
<property>
  <name>hbase.rootdir</name>
  <value>file:///{{location}}/data</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>{{location}}/zookeeper</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>{{hostname}}</value>
</property>
Start the HBase service: bin/start-hbase.sh
Run the HBase shell: bin/hbase shell
Stop the HBase service: bin/stop-hbase.sh
Posted in HBase, Mac
HBase: put byte value
put "TableName", "rowkey", "cf:fieldname", [0].pack("N")
# N => 32-bit unsigned integer, big-endian (network byte order)
put "TableName", "rowkey", "cf:fieldname", [0].pack("Q>")
# Q => 64-bit unsigned integer in native endianness; the ">" modifier forces big-endian
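For comparison, the Java client's Bytes.toBytes produces the same big-endian layout, so values written this way from the shell stay readable from Java; a small sketch:

import org.apache.hadoop.hbase.util.Bytes;

public class PutByteValueSketch {
    public static void main(String[] args) {
        // Bytes.toBytes(int) is 4 bytes big-endian, like pack("N")
        System.out.println(Bytes.toStringBinary(Bytes.toBytes(0)));  // \x00\x00\x00\x00
        // Bytes.toBytes(long) is 8 bytes big-endian, like pack("Q>")
        System.out.println(Bytes.toStringBinary(Bytes.toBytes(0L))); // \x00\x00\x00\x00\x00\x00\x00\x00
    }
}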
Posted in HBase
[MySQL] Find in fixed string
Ref. MySQL's FIND_IN_SET function

SELECT * FROM table WHERE FIND_IN_SET(ID, '2,5,6,7,8,9,11,21,33,45')

FIND_IN_SET returns the 1-based position of ID in the comma-separated list (0 if absent), so it doubles as a membership test.
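From Java, the whole list can be bound as one string parameter; a sketch using plain JDBC (the connection URL, credentials, and the table/column names are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FindInSetSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/test", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT * FROM `table` WHERE FIND_IN_SET(ID, ?)")) {
            // The entire comma-separated list is a single string parameter
            ps.setString(1, "2,5,6,7,8,9,11,21,33,45");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("ID"));
                }
            }
        }
    }
}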
Posted in MySQL