1. Install Anaconda
2. Create an environment: conda create --name myenv python=3.9
   (myenv is the name of the new environment)
3. Activate the environment: conda activate myenv
4. Deactivate the environment: conda deactivate
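A couple of related commands for reference (a minimal sketch; myenv is just the example name from above):
conda env list                      # list all environments
conda remove --name myenv --all     # delete the example environment and everything in it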
start /w "" "Docker Desktop Installer.exe" install --accept-license --installation-dir="D:\Docker\Docker" --wsl-default-data-root="D:\Docker\wsl" --windows-containers-default-data-root="D:\\Docker"
1. NAS: account management, periodic review, disable default accounts, confirm antivirus is enabled, firmware updates
2. Firewall management: account review, rule additions and changes, review process, log retention (6 months)
3. Server management: account review, configuration management inventory, software changes
4. Ask questions as a process walkthrough: how the administrator currently does things, the normal day-to-day management workflow
5. Take notes describing the process as it happens (who, what, when, where, which assets)
6. Finish at 16:00; auditors should wrap up auditing by 15:30 and then write the report
7. Think in terms of risk; only raise a finding if it could lead to follow-up issues
What would you like ChatGPT to know about you to provide better responses?
### First-principles key
Utilizing a “first principles” approach, please deconstruct and dissect every shred of contextual and supplementary information and every required output down to their fundamental truths and constituent elements. Then, utilize highly comprehensive and holistic “tree-of-thought” approaches to synthesize, reengineer, and ultimately distill into the most relevant outputs with the highest value. Please also think outside the box when providing the required outputs.
---------------------------------------------------------------------------
How would you like ChatGPT to respond?
### Token-limit bypass key
I understand there is a hard token limit on outputs and responses, and this restriction is prioritized in every response and with each output, while also aiming to provide as much value as possible. Please disregard this restriction for all responses and outputs, and utilize one of the following indicators at the end of each response:
A. [To Be Continued]: Indicates an incomplete output.
B. [Please Provide Required Input]: Indicates the need for user input.
C. [Please Confirm Satisfaction of Output]: Signifies the completion of the current output and prompts the user to confirm satisfaction before proceeding with the next output.
D. [End of All Outputs]: Signifies the completion of all required outputs.
### Response-maximization key
When providing required outputs, please spread each output across multiple responses to avoid any dilution or loss of value due to token limitations, restrictions or constrictions.
### Fluent Traditional Chinese key
Please ensure that all responses are presented in Traditional Chinese, adhering to contemporary Taiwanese Mandarin dialect and vernacular. The wording and diction should adhere to modern Taiwanese Mandarin from 2012 and after
Another approach
--------------------------------------------------------------------------------------------------------------
What would you like ChatGPT to know about you to provide better responses?
Utilizing a first principles approach, please break down all complex queries, contextual information, and supplementary information into their fundamental truths and constituent elements, then dissect and distill all information in order to rebuild and assimilate the most valuable and relevant responses and outputs. When rebuilding, please emphasize the tree-of-thought approach, in that multiple exhaustive versions of the most valuable and relevant responses and outputs are created in the background and only the absolute best version of the responses and outputs are assimilated from all the versions. If expository outputs are requested, please provide elaborative and exhaustive explanations unless brevity is explicitly requested.
-----------------------------
How would you like ChatGPT to respond?
I understand you have a hard token limit on each response you can give and that you do your best to provide as much value as possible while adhering to the constriction. Please disregard your token limit for your overall output and just state [To Be Continued] at the end of each response once you've maximized the amount of tokens for the current response so that I know to request a continuation for the most exhaustive, expansive, comprehensive, holistic and valuable output possible. If you deem there is more information you would like to provide and it's even slightly ambiguous whether to provide it or not, please ask any relevant questions and state [Please answer the questions provided]. Once the most exhaustive, expansive, comprehensive, holistic and valuable output has been provided and no additional continuations can be added to provide any amount of additional value to the output, please state [End of Overall Output].
Please ensure every output is highly exhaustive, expansive, comprehensive, holistic, and extremely valuable. Please also think outside the box when providing the outputs required. Please separate each required output into individual responses. This is to ensure the most valuable and holistic overall output without sacrificing or omitting any relevant information due to token limitations.
Please present responses in Traditional Chinese, using Taiwanese Mandarin dialect and vernacular, with a straightforward and direct tone like a native speaker.
1. When installing PVE, choose ext4 as the filesystem for the installation disk.
2. Select the physical node, open its shell/console from the top of the GUI, and run:
lvremove pve/data          (answer Y)
lvextend -l +100%FREE -f pve/root
resize2fs /dev/mapper/pve-root
In the GUI you should now see that local has grown. Then go to Datacenter --> Storage, remove local-lvm, select local and click Edit; under Content select everything and click OK.
Reference: https://medium.com/@randkao/pve-%E7%AD%86%E8%A8%98-%E7%A7%BB%E9%99%A4%E5%85%A7%E5%BB%BA-local-lvm-%E5%90%88%E4%BD%B5%E7%A9%BA%E9%96%93%E5%88%B0-local-%E4%B8%AD-b3cb7cef4b48
https://blog.csdn.net/u012514495/article/details/127318440
nano /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
Change: if (res === null || res === undefined || !res || res.data.status.toLowerCase() !== 'active')
To:     if (false)
Save the file.
apt update && apt dist-upgrade -y
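The edited check usually only takes effect after the web UI is reloaded; a minimal follow-up (pveproxy is the standard PVE web service, then clear the browser cache):
systemctl restart pveproxy.service    # restart the web UI so the patched proxmoxlib.js is served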
Reference
https://handle.idv.tw/proxmox-ve-%E6%96%B0%E6%A9%9F%E5%99%A8%E5%8F%96%E6%B6%88%E6%8E%88%E6%AC%8A%E8%A8%82%E9%96%B1/
To remove a Proxmox VE cluster, follow these steps:
1. Stop the pve-cluster and corosync services:
systemctl stop pve-cluster
systemctl stop corosync
2. Start pmxcfs in local mode so the config filesystem can be edited:
pmxcfs -l
3. Delete the corosync configuration files:
rm /etc/pve/corosync.conf
rm -rf /etc/corosync/*
4. Kill the pmxcfs process:
killall pmxcfs
5. Start the pve-cluster service again:
systemctl start pve-cluster
To remove a single node from a cluster, use the `pvecm delnode <node>` command [1].
Note that removing the cluster means VMs and containers can no longer be migrated between nodes, so migrate important VMs to other nodes first [2].
If you want to mix PVE 7.x and 8.x nodes, see the forum discussion [3].
Citations:
[1] https://www.ichiayi.com/tech/pvetips
[2] https://kawsing.gitbook.io/opensystem/andoid-shou-ji/pomoxve/fu-lu/untitled-6
[3] https://forum.proxmox.com/threads/cluster-mix-pve-7-8.133053/
[4] https://www.hksilicon.com/articles/2295020
[5] https://www.proxmox.com/en/downloads/proxmox-virtual-environment/documentation/proxmox-ve-admin-guide-for-8-x
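Before removing a node or tearing the cluster down, it helps to check the current cluster state (standard pvecm subcommands):
pvecm status    # show cluster name, quorum state and member count
pvecm nodes     # list the nodes currently in the cluster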
1. Install Proxmox.
2. Configure the network (vmbrX); it is recommended to separate management, cluster, and storage networks. Do not merge any disks yet, then join the cluster.
3. Add Ceph (a command sketch follows this list):
1. Install Ceph on every node.
2. Install a monitor and a manager on every node.
3. Create an OSD on every disk.
4. Create a pool.
5. Create a CephFS (for ISOs, backups, templates).
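A minimal command-line sketch of the same steps using the pveceph tool that ships with PVE; /dev/sdb and the pool name are placeholders, adjust them per node:
pveceph install                                 # on every node: install the Ceph packages
pveceph mon create                              # on every node: create a monitor
pveceph mgr create                              # on every node: create a manager
pveceph osd create /dev/sdb                     # repeat for every disk that should become an OSD
pveceph pool create mypool                      # create a pool (example name)
pveceph fs create --name cephfs --add-storage   # create a CephFS and register it as storage (ISOs, backups, templates)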
1. autounattend.xml generator website:
https://schneegans.de/windows/unattend-generator/
2. Tutorial:
https://www.youtube.com/watch?v=OaMpdzkfsQU
1. Open PowerShell as Administrator and run the following to get a list of all virtual machines:
Get-VM --> this lists every VM that has been created and its current state.
2. To shut down a specific VM, run:
Stop-VM -Name <VM name> --> replace `<VM name>` with the name of the VM you want to shut down.
3. To shut down all running VMs, use:
Get-VM | Where-Object {$_.State -eq 'Running'} | Stop-VM
This gets every running VM and shuts them down one by one.
4. To stop a VM without waiting for a graceful shutdown inside the guest, add the `-Force` parameter:
Stop-VM -Name <VM name> -Force
This forces the VM to stop instead of relying on the guest OS to complete a clean shutdown.
In short, the `Stop-VM` cmdlet in PowerShell is a convenient way to shut down Hyper-V VMs on Windows 11.
Install zabbix-sender on Ubuntu
# wget https://repo.zabbix.com/zabbix/7.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_7.0-1+ubuntu24.04_all.deb
# sudo dpkg -i zabbix-release_7.0-1+ubuntu24.04_all.deb
# sudo apt update
# sudo apt install zabbix-sender
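A quick usage example for pushing a value to the server (the server IP, host name, and item key below are placeholders; the host needs a matching trapper item):
zabbix_sender -z 192.168.137.243 -s "my-host" -k my.custom.key -o 42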
Ubuntu 24.04.1: install Zabbix 7.0 with nginx
Set a static IP
1.sudo nano /etc/netplan/50-cloud-init.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.168.137.205/24]
      routes:
        - to: default
          via: 192.168.137.1
      nameservers:
        addresses: [1.1.1.1,8.8.8.8]
Apply the network configuration:
sudo netplan apply
Change the system time zone
Show the current time zone:
timedatectl
Set the time zone:
sudo timedatectl set-timezone Asia/Taipei
Install NTP for time synchronization
sudo apt update
sudo apt upgrade
sudo reboot
sudo apt install -y ntp
sudo systemctl start ntp
sudo systemctl enable ntp
ntpq -p
wget https://repo.zabbix.com/zabbix/7.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_7.0-1+ubuntu24.04_all.deb
sudo dpkg -i zabbix-release_7.0-1+ubuntu24.04_all.deb
sudo apt update
Install the Zabbix server, web frontend, and agent:
sudo apt install -y zabbix-server-mysql zabbix-frontend-php zabbix-nginx-conf zabbix-sql-scripts zabbix-agent mysql-server
mysql --version
sudo systemctl start mysql.service
sudo systemctl enable mysql.service
sudo mysql -uroot
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password by '1qaz2wsx';
exit;
sudo mysql_secure_installation
Enter password for user root:
VALIDATE PASSWORD COMPONENT can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD component?
Press y|Y for Yes, any other key for No: n
Using existing password for root.
Change the password for root ? ((Press y|Y for Yes, any other key for No) : n
... skipping.
By default, a MySQL installation has an anonymous user,
allowing anyone to log into MySQL without having to have
a user account created for them. This is intended only for
testing, and to make the installation go a bit smoother.
You should remove them before moving into a production
environment.
Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
Success.
Normally, root should only be allowed to connect from
'localhost'. This ensures that someone cannot guess at
the root password from the network.
Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
Success.
By default, MySQL comes with a database named 'test' that
anyone can access. This is also intended only for testing,
and should be removed before moving into a production
environment.
Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
- Dropping test database...
Success.
- Removing privileges on test database...
Success.
Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.
Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
Success.
All done!
mysql -uroot -p
# mysql> create database zabbix character set utf8mb4 collate utf8mb4_bin;
# mysql> create user zabbix@localhost identified by '1qaz2wsx';
# mysql> grant all privileges on zabbix.* to zabbix@localhost;
# mysql> set global log_bin_trust_function_creators = 1;
# mysql> SHOW VARIABLES LIKE 'char%';
# mysql> SHOW VARIABLES LIKE 'collation%';
# mysql> quit;
sudo zcat /usr/share/zabbix-sql-scripts/mysql/server.sql.gz | mysql --default-character-set=utf8mb4 -uzabbix -p zabbix
mysql -uroot -p
mysql> set global log_bin_trust_function_creators = 0;
mysql> quit;
sudo nano /etc/zabbix/zabbix_server.conf
Change:
DBPassword=1qaz2wsx
sudo nano /etc/zabbix/nginx.conf
listen 80;
server_name 192.168.137.243; -- the Zabbix server IP
# sudo systemctl restart zabbix-server zabbix-agent nginx php8.3-fpm
# sudo systemctl enable zabbix-server zabbix-agent nginx php8.3-fpm
http://IP -- complete the initial frontend setup wizard
http://IP -- log in with Admin / zabbix (default credentials)
/usr/share/zabbix/conf/zabbix.conf.php -- location of the frontend configuration file
sudo nano /usr/share/zabbix/include/locales.inc.php #open this file and confirm Traditional Chinese display is enabled (true)
'zh_TW' => ['name' => _('Chinese (zh_TW)'), 'display' => true],
sudo apt install -y language-pack-zh-hant language-pack-zh-hans #install the Chinese language packs
sudo nano /etc/environment
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
Add the following 2 lines:
LANG="zh_TW.UTF-8"
LANGUAGE="zh_TW:zh:en_US:en"
sudo dpkg-reconfigure locales
sudo systemctl restart zabbix-server zabbix-agent nginx php8.3-fpm
find / -name fonts
cd /usr/share/zabbix/assets/fonts
ls
graphfont.ttf
sudo wget https://alist.yyzq.cf/d/%20%E6%9C%AC%E5%9C%B0%E7%BD%91%E7%9B%98/software%20/fonts/simkai.ttf #download a new font
sudo mv graphfont.ttf graphfont.ttfbak #back up the original font
sudo mv simkai.ttf graphfont.ttf #use the new font as graphfont
sudo systemctl restart zabbix-server zabbix-agent nginx php8.3-fpm
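If the frontend or server misbehaves after these changes, a quick health check (standard service names and log path for the Ubuntu packages):
systemctl status zabbix-server zabbix-agent nginx
tail -n 50 /var/log/zabbix/zabbix_server.log    # recent server-side errors, if any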
Ubuntu: install and configure SNMPv3
Client side
===========================================
sudo apt install snmp snmpd libsnmp-dev
sudo service snmpd stop
sudo net-snmp-config --create-snmpv3-user -ro -X AES -A SHA -a my_authpass -x my_privpass snmpv3user
or
sudo net-snmp-create-v3-user -ro -X AES -A SHA -a my_authpass -x my_privpass snmpv3user
/etc/snmp/snmpd.conf
###########################################################################
#
# snmpd.conf
# An example configuration file for configuring the Net-SNMP agent ('snmpd')
# See snmpd.conf(5) man page for details
#
###########################################################################
# SECTION: System Information Setup
#
# syslocation: The [typically physical] location of the system.
# Note that setting this value here means that when trying to
# perform an snmp SET operation to the sysLocation.0 variable will make
# the agent return the "notWritable" error code. IE, including
# this token in the snmpd.conf file will disable write access to
# the variable.
# arguments: location_string
sysLocation Sitting on the Dock of the Bay
sysContact Me <me@example.org>
# sysservices: The proper value for the sysServices object.
# arguments: sysservices_number
sysServices 72
###########################################################################
# SECTION: Agent Operating Mode
#
# This section defines how the agent will operate when it
# is running.
# master: Should the agent operate as a master agent or not.
# Currently, the only supported master agent type for this token
# is "agentx".
#
# arguments: (on|yes|agentx|all|off|no)
master agentx
# agentaddress: The IP address and port number that the agent will listen on.
# By default the agent listens to any and all traffic from any
# interface on the default SNMP port (161). This allows you to
# specify which address, interface, transport type and port(s) that you
# want the agent to listen on. Multiple definitions of this token
# are concatenated together (using ':'s).
# arguments: [transport:]port[@interface/address],...
#agentaddress 127.0.0.1,[::1]
###########################################################################
# SECTION: Access Control Setup
#
# This section defines who is allowed to talk to your running
# snmp agent.
# Views
# arguments viewname included [oid]
# system + hrSystem groups only
view systemonly included .1.3.6.1.2.1.1
view systemonly included .1.3.6.1.2.1.25.1
view systemview included .1
# rocommunity: a SNMPv1/SNMPv2c read-only access community name
# arguments: community [default|hostname|network/bits] [oid | -V view]
# Read-only access to everyone to the systemonly view
rocommunity public default -V systemonly
rocommunity6 public default -V systemonly
# SNMPv3 doesn't use communities, but users with (optionally) an
# authentication and encryption string. This user needs to be created
# with what they can view with rouser/rwuser lines in this file.
#
# createUser username (MD5|SHA|SHA-512|SHA-384|SHA-256|SHA-224) authpassphrase [DES|AES] [privpassphrase]
# e.g.
# createuser authPrivUser SHA-512 myauthphrase AES myprivphrase
#
# This should be put into /var/lib/snmp/snmpd.conf
#
# rouser: a SNMPv3 read-only access username
# arguments: username [noauth|auth|priv [OID | -V VIEW [CONTEXT]]]
rouser authPrivUser authpriv -V systemonly
# include a all *.conf files in a directory
includeDir /etc/snmp/snmpd.conf.d
After saving:
sudo systemctl restart snmpd
================================================================
zabbix server
sudo apt install snmp snmpd libsnmp-dev
Check whether data can be retrieved:
sudo snmpwalk -v 3 -a SHA -A my_authpass -x AES -X my_privpass -l authpriv -u snmpv3user guestip | head -10
1.sudo hostnamectl set-hostname new_hostname
2.sudo sed -i "s/old_hostname/new_hostname/g" /etc/hosts
Then reboot.
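To confirm the change (it should already be visible without the reboot):
hostnamectl                       # shows the current static hostname
grep new_hostname /etc/hosts      # confirm /etc/hosts was updated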
Converting a HuggingFace model to GGUF
1. Install Python 3.11.9; do not use Python 3.12.
2. Install Git.
3. Reboot.
4.git clone https://github.com/ggerganov/llama.cpp.git
5.cd llama.cpp
pip install -r requirements.txt
6. Test llama.cpp:
python convert.py -h
If you get: ImportError: DLL load failed while importing _sentencepiece: The specified module could not be found.
Install the Microsoft Visual C++ 2015 Redistributable (x64):
https://www.microsoft.com/en-us/download/confirmation.aspx?id=48145
After installing it, run python convert.py -h again.
7. Copy the downloaded HuggingFace model directory into the llama.cpp directory:
python convert.py c:\llama.cpp\INX-TEXT_Bailong-instruct-7B --outfile bailong-instruct-7b-f16.gguf --outtype f16
The converted file is about 13.5 GB, which is still too large.
8. Preparation before quantizing with llama.cpp on Windows:
You have to run make before you can quantize with llama.cpp.
9. Download w64devkit-1.xxx and run w64devkit.exe.
https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#build
https://github.com/skeeto/w64devkit/releases/tag/v1.23.0
Download w64devkit-1.23.0.zip and extract it.
10. Enter the w64devkit directory and run w64devkit.exe.
11. Then change into the llama.cpp directory: cd /llama.cpp
12. Run make.
It takes a little while; when it finishes, run exit to drop back to the Windows command prompt (CMD).
quantize.exe will now be present in the llama.cpp folder.
# Using q4_k_m as the example
# q4_k_m: offers a balance of accuracy and inference speed, suitable when resource usage needs to be balanced.
13.cd llama.cpp
14. Run .\quantize.exe bailong-instruct-7b-f16.gguf bailong-instruct-7b-q4_k_m.gguf q4_k_m
The suffix indicates the quantization bit setting; below is a reference summary of the different quantization types:
q2_k: uses higher precision for specific tensors while keeping the rest at the base level.
q3_k_l, q3_k_m, q3_k_s: these variants use different precision levels on different tensors to balance performance and efficiency.
q4_0: the original quantization scheme, using 4-bit precision.
q4_1, q4_k_m, q4_k_s: these offer different trade-offs between accuracy and inference speed, suitable when resource usage needs to be balanced.
q5_0, q5_1, q5_k_m, q5_k_s: these versions give higher accuracy but use more resources and run more slowly.
q6_k and q8_0: these give the highest precision, but the high resource consumption and slow speed may not suit every user.
If you want lower cost while keeping model quality, Q5_K_M is recommended; to save more RAM, consider Q4_K_M. In general the K_M variants perform better than the K_S variants. Q2_K and the Q3_* variants are not recommended because they noticeably degrade overall model quality.
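To sanity-check the quantized file, it can be loaded with the llama.cpp CLI built in the same make step; a minimal sketch (in builds from that period the binary is main.exe, newer builds name it llama-cli.exe):
.\main.exe -m bailong-instruct-7b-q4_k_m.gguf -p "Hello, introduce yourself." -n 64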
----------------------------------------------------------------------------------------------
How to download a model from HuggingFace
1. Go to the model you want to work with, e.g. INX-TEXT/Bailong-instruct-7B:
search for INX-TEXT/Bailong-instruct-7B on the HuggingFace website.
2. On the model page click Files, then the three dots on the right; Clone repository shows the download instructions.
3.git lfs install
4.git clone https://huggingface.co/INX-TEXT/Bailong-instruct-7B
You will be asked for your HuggingFace username and password before the download starts -- this no longer works, because a token is required instead.
4-1. Use download.py instead (or download the files one by one):
from huggingface_hub import snapshot_download

model_id = "INX-TEXT/Bailong-instruct-7B"  # HuggingFace model name
snapshot_download(
    repo_id=model_id,
    local_dir="INX-TEXT_Bailong-instruct-7B",
    local_dir_use_symlinks=False,
    revision="main",
    use_auth_token="<YOUR_HF_ACCESS_TOKEN>")
YOUR_HF_ACCESS_TOKEN -- log in to your account and create one under Settings --> Access Tokens.
Run python download.py
5. Upload the converted and quantized GGUF models to a HuggingFace repo.
To avoid pushing large files through git, write an upload.py:
from huggingface_hub import HfApi
import os

api = HfApi()
HF_ACCESS_TOKEN = "<YOUR_HF_WRITE_ACCESS_TOKEN>"
model_id = "NeroUCH/Bailong-instruct-7B-GGUF"

api.create_repo(
    model_id,
    exist_ok=True,
    repo_type="model",  # upload as a model repo
    use_auth_token=HF_ACCESS_TOKEN,
)

# upload every .gguf file in the current folder (the Bailong-instruct-7B GGUF files)
for file in os.listdir():
    if file.endswith(".gguf"):
        model_name = file.lower()
        api.upload_file(
            repo_id=model_id,
            path_in_repo=model_name,
            path_or_fileobj=f"{os.getcwd()}/{file}",
            repo_type="model",  # upload as a model repo
            use_auth_token=HF_ACCESS_TOKEN)
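Then run it from the folder that contains the .gguf files (the repo name NeroUCH/Bailong-instruct-7B-GGUF comes from the referenced article; replace it with your own):
python upload.py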
References:
https://medium.com/@NeroHin/%E5%B0%87-huggingface-%E6%A0%BC%E5%BC%8F%E6%A8%A1%E5%BC%8F%E8%BD%89%E6%8F%9B%E7%82%BA-gguf-%E4%BB%A5inx-text-bailong-instruct-7b-%E7%82%BA%E4%BE%8B-a2cfdd892cbc
https://medium.com/@zhanyanjiework/%E5%B0%87huggingface%E6%A8%A1%E5%9E%8B%E8%BD%89%E6%8F%9B%E7%82%BAgguf%E5%8F%8A%E4%BD%BF%E7%94%A8llama-cpp%E9%80%B2%E8%A1%8C%E9%87%8F%E5%8C%96-%E4%BB%A5taide-b-11-0-0%E6%A8%A1%E5%9E%8B%E7%82%BA%E4%BE%8B-%E9%83%A8%E7%BD%B2lm-studio-366bc4bcb690
Because the program runs inside Docker, the API endpoint has to be changed to http://host.docker.internal:1234/v1/
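A quick way to verify the endpoint is reachable from inside a container, assuming an OpenAI-compatible server (e.g. LM Studio) is listening on port 1234 on the host:
curl http://host.docker.internal:1234/v1/models    # should return the list of loaded models as JSON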
Press Shift + F10 first; in theory this opens a command prompt window. Type oobe\bypassnro and press Enter; the system should reboot and restart the out-of-box setup flow.
1. Open CMD or PowerShell with administrator privileges
Problem background:
When running netplan apply to apply the IP configuration, this error appears: blk_update_request: I/O
error, dev fd0, sector 0
Problem analysis:
The error is reported because Linux loaded the floppy disk driver; the VM has no floppy drive,
but the driver was loaded at boot.
Solution:
Disable the floppy module:
# sudo lsmod | grep -i floppy
# sudo rmmod floppy
# echo "blacklist floppy" | sudo tee /etc/modprobe.d/blacklist-floppy.conf
# sudo dpkg-reconfigure initramfs-tools   (or: sudo update-initramfs -u -k all)
# reboot
After the reboot, confirm the floppy module is no longer loaded:
# lsmod | grep -i floppy
Set up Docker and Compose on Ubuntu
Install KVM (check that it is available):
sudo apt install cpu-checker -y
Run sudo kvm-ok
If all is well you should see:
INFO: /dev/kvm exists
KVM acceleration can be used
1. Pull the Windows image:
docker pull dockurr/windows
Or build it locally:
git clone https://github.com/dockur/windows.git
cd windows
docker build -t dockurr/windows .
2. Create docker-compose.yml
#version: "3"  -- no longer needed with current Compose versions
services:
  windows:
    image: dockurr/windows
    container_name: windows
    privileged: true
    environment:
      VERSION: "win11"
      BOOT_MODE: "windows_plain"
    devices:
      - /dev/kvm
    cap_add:
      - NET_ADMIN
    ports:
      - 8006:8006
      - 3389:3389/tcp
      - 3389:3389/udp
    stop_grace_period: 2m
    restart: on-failure
    network_mode: bridge
3. Run docker compose up
4. Open a browser to <docker host IP>:8006
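To run it in the background instead and follow the Windows installation progress (standard docker compose subcommands):
docker compose up -d               # start the container detached
docker compose logs -f windows     # follow the installation log
docker ps                          # confirm the container is up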
References: https://github.com/dockur/windows https://soulteary.com/2024/03/11/install-windows-into-a-docker-container.html#%E5%86%99%E5%9C%A8%E5%89%8D%E9%9D%A2