Microsoft C#'s Array.Sort vs. LINQ's OrderBy: big algorithm questions, feat. ChatGPT
Who do you ask? ChatGPT, of course… but is it right?
No idea. Still, you can imagine that core underlying components can't be glossed over in a few words… the things we use every day without thinking are all trees planted by those who came before.

Note: this is my own blog murmuring (just keeping a record); please verify before using.
Given an array nums of size n, return the majority element.
The majority element is the element that appears more than ⌊n / 2⌋ times. You may assume that the majority element always exists in the array.
Example 1:
Input: nums = [3,2,3] Output: 3
Example 2:
Input: nums = [2,2,1,1,1,2,2] Output: 2
Constraints:
n == nums.length
1 <= n <= 5 * 10^4
-10^9 <= nums[i] <= 10^9
Follow-up: Could you solve the problem in linear time and in O(1) space?
My first version:
public class Solution {
    public int MajorityElement(int[] nums) {
        if (nums.Length == 1)
            return nums[0];
        var max = nums.Length / 2;
        var result = 0;
        Dictionary<int, int> counter = new Dictionary<int, int>();
        foreach (var n in nums)
        {
            if (counter.ContainsKey(n))
            {
                counter[n]++;
                if (counter[n] > max)
                {
                    max = counter[n];
                    result = n;
                }
            }
            else
            {
                counter.Add(n, 1);
            }
        }
        return result;
    }
}
My intuition was to implement it with a counter, but this approach has O(n) time complexity and O(n) space complexity as well.
None of the following is written by me; it's purely for learning.
I saw a one-liner style of solution. The idea behind it: the majority element must always cross the middle position, which is just what "more than half" means by definition XD
public class Solution
{
    public int MajorityElement(int[] nums)
        => nums.GroupBy(x => x).Select(x => new { x.Key, c = x.Count() }).OrderByDescending(x => x.c).FirstOrDefault().Key;
}
Approach 2
public class Solution {
    public int MajorityElement(int[] nums) {
        //return nums.ToList().GroupBy(x => x).OrderByDescending(g => g.Count()).Select(y => y.Key).FirstOrDefault();
        Array.Sort(nums);
        return nums[nums.Length / 2];
    }
}
Time complexity: O(n log(n))
The one I most wanted to understand:
public class Solution {
    public int MajorityElement(int[] nums) {
        int count = 0;
        int? candidate = null;
        foreach (int num in nums) {
            if (count == 0) candidate = num;
            count += (num == candidate) ? 1 : -1;
        }
        return candidate.Value;
    }
}
ChatGPT版本
using System;

class Program
{
    static int MajorityElement(int[] nums)
    {
        int count = 0;
        int candidate = 0;
        foreach (int num in nums)
        {
            if (count == 0)
                candidate = num;
            if (num == candidate)
                count++;
            else
                count--;
        }
        return candidate;
    }

    static void Main(string[] args)
    {
        int[] nums = { 2, 2, 1, 1, 1, 2, 2 };
        int majority = MajorityElement(nums);
        Console.WriteLine("Majority element: " + majority);
    }
}
The explanation:
The Boyer-Moore Voting Algorithm is an efficient way to find the majority element in a list with a large number of elements. Its time complexity is O(n), where n is the size of the list. The algorithm was proposed by Robert S. Boyer and J Strother Moore in 1981.
The basic idea is to assume some element of the list is the majority element and then traverse: for each element, if it equals the candidate, increment the candidate's counter by 1, otherwise decrement it. When the counter hits 0, replace the candidate with the current element. The candidate left standing at the end is the majority element.
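To make the voting concrete, here is a small trace of the algorithm on Example 2's input. The loop mirrors the code above; the printing is only for illustration:

```csharp
using System;

// Trace the Boyer-Moore vote over nums = [2,2,1,1,1,2,2] (Example 2).
int[] nums = { 2, 2, 1, 1, 1, 2, 2 };
int count = 0;
int candidate = 0;
foreach (int num in nums)
{
    if (count == 0)
        candidate = num;
    count += (num == candidate) ? 1 : -1;
    Console.WriteLine($"num={num} -> candidate={candidate}, count={count}");
}
// The candidate briefly flips to 1 when the 1s cancel out the early 2s,
// but the trailing 2s re-elect 2, which is the true majority.
Console.WriteLine($"majority = {candidate}");
```

Only candidate and count are ever stored, which is exactly why the algorithm runs in O(1) space.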
Given an integer x, return true if x is a palindrome, and false otherwise.
Example 1:
Input: x = 121 Output: true Explanation: 121 reads as 121 from left to right and from right to left.
Example 2:
Input: x = -121 Output: false Explanation: From left to right, it reads -121. From right to left, it becomes 121-. Therefore it is not a palindrome.
Example 3:
Input: x = 10 Output: false Explanation: Reads 01 from right to left. Therefore it is not a palindrome.
Constraints:
-2^31 <= x <= 2^31 - 1
My Answer:
public class Solution {
    public bool IsPalindrome(int x) {
        string xStr = x.ToString();
        string reverseXStr = new string(xStr.Reverse().ToArray());
        return xStr == reverseXStr;
    }
}
Note: at first I thought of splitting the string and comparing it forwards against backwards, then I remembered there's a Reverse function I could use.
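For reference, the classic follow-up to this problem is to skip the string conversion and reverse only half of the digits numerically. Below is a common sketch of that idea (IsPalindromeNumeric is my own name for it, not from the original post):

```csharp
using System;

// Reverse the lower half of the number and compare it with the upper half.
// Negative numbers and non-zero multiples of 10 can never be palindromes.
static bool IsPalindromeNumeric(int x)
{
    if (x < 0 || (x % 10 == 0 && x != 0))
        return false;
    int reversedHalf = 0;
    while (x > reversedHalf)
    {
        reversedHalf = reversedHalf * 10 + x % 10;
        x /= 10;
    }
    // For an odd digit count, reversedHalf carries the middle digit; drop it with / 10.
    return x == reversedHalf || x == reversedHalf / 10;
}

Console.WriteLine(IsPalindromeNumeric(121));  // True
Console.WriteLine(IsPalindromeNumeric(-121)); // False
Console.WriteLine(IsPalindromeNumeric(10));   // False
```

This avoids allocating any strings and also never overflows, since it only ever reverses half the digits.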
It's marked Hard, so let's have a look.
Given two sorted arrays nums1 and nums2 of size m and n respectively, return the median of the two sorted arrays.
The overall run time complexity should be O(log(m+n)).
Example 1:
Input: nums1 = [1,3], nums2 = [2] Output: 2.00000 Explanation: merged array = [1,2,3] and median is 2.
Example 2:
Input: nums1 = [1,2], nums2 = [3,4] Output: 2.50000 Explanation: merged array = [1,2,3,4] and median is (2 + 3) / 2 = 2.5.
Constraints:
nums1.length == m
nums2.length == n
0 <= m <= 1000
0 <= n <= 1000
1 <= m + n <= 2000
-10^6 <= nums1[i], nums2[i] <= 10^6
The intuitive approach:
public class Solution {
    public double FindMedianSortedArrays(int[] nums1, int[] nums2) {
        var merged = nums1.Concat(nums2);
        var mergedArray = merged.OrderBy(a => a).ToArray();
        if (mergedArray.Length % 2 > 0)
        {
            var median = mergedArray[(mergedArray.Length - 1) / 2];
            return median;
        }
        else
        {
            var medianFirst = mergedArray[(mergedArray.Length / 2) - 1];
            var medianLast = mergedArray[mergedArray.Length / 2];
            return ((double)medianFirst + (double)medianLast) / 2;
        }
    }
}
And it just passed. Why on earth is this Hard?
I saw that plenty of other people wrote it the same way.
Complexity:
Time complexity: dominated by the sorting operation, which takes O(N log N) time, where N is the total number of elements in nums1 and nums2.
Space complexity: determined by the size of the combined list nums, which is O(N), where N is the total number of elements in nums1 and nums2.
public class Solution {
    public double FindMedianSortedArrays(int[] nums1, int[] nums2) {
        List<int> nums = nums1.ToList();
        nums.AddRange(nums2.ToList());
        nums.Sort();
        if (nums.Count % 2 == 1)
            return nums[nums.Count / 2];
        return (nums[nums.Count / 2] + nums[nums.Count / 2 - 1]) / 2.0;
    }
}
So the real difficulty of this problem is presumably explaining how to actually achieve the required time complexity XD
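For the record, the intended O(log(m+n)) solution binary-searches a partition point in the shorter array such that everything on the combined left side is <= everything on the right side; the median is then read off the four boundary values. Here is a sketch of that standard partition approach (FindMedianLogTime is a name I made up; this is not the post's code):

```csharp
using System;

static double FindMedianLogTime(int[] a, int[] b)
{
    if (a.Length > b.Length)
        (a, b) = (b, a); // always binary-search the shorter array
    int m = a.Length, n = b.Length;
    int lo = 0, hi = m;
    while (lo <= hi)
    {
        int i = (lo + hi) / 2;        // elements taken from a's left side
        int j = (m + n + 1) / 2 - i;  // elements taken from b's left side
        int aLeft = i == 0 ? int.MinValue : a[i - 1];
        int aRight = i == m ? int.MaxValue : a[i];
        int bLeft = j == 0 ? int.MinValue : b[j - 1];
        int bRight = j == n ? int.MaxValue : b[j];
        if (aLeft <= bRight && bLeft <= aRight) // valid partition found
        {
            if ((m + n) % 2 == 1)
                return Math.Max(aLeft, bLeft);
            return (Math.Max(aLeft, bLeft) + Math.Min(aRight, bRight)) / 2.0;
        }
        if (aLeft > bRight)
            hi = i - 1; // took too many from a
        else
            lo = i + 1; // took too few from a
    }
    throw new ArgumentException("inputs must be sorted");
}

Console.WriteLine(FindMedianLogTime(new[] { 1, 3 }, new[] { 2 }));    // 2
Console.WriteLine(FindMedianLogTime(new[] { 1, 2 }, new[] { 3, 4 })); // 2.5
```

The search runs over the shorter array only, so the cost is O(log(min(m, n))), comfortably within the required bound.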
A new plan starting in July: one LeetCode problem every two days (Mon, Wed, Fri), each time picking a problem that C# is well suited to, to keep the brain awake over the summer.
Problem: https://leetcode.com/problems/lru-cache/
wiki: https://en.wikipedia.org/wiki/Cache_replacement_policies#LRU
Implement the LRUCache class:
LRUCache(int capacity) Initializes the LRU cache with positive size capacity.
int get(int key) Returns the value of the key if the key exists, otherwise returns -1.
void put(int key, int value) Updates the value of the key if the key exists. Otherwise, adds the key-value pair to the cache. If the number of keys exceeds the capacity from this operation, evicts the least recently used key.
The functions get and put must each run in O(1) average time complexity.
Example 1:
Input ["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"] [[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]] Output [null, null, null, 1, null, -1, null, -1, 3, 4]
Explanation
LRUCache lRUCache = new LRUCache(2); lRUCache.put(1, 1); // cache is {1=1} lRUCache.put(2, 2); // cache is {1=1, 2=2} lRUCache.get(1); // return 1 lRUCache.put(3, 3); // LRU key was 2, evicts key 2, cache is {1=1, 3=3} lRUCache.get(2); // returns -1 (not found) lRUCache.put(4, 4); // LRU key was 1, evicts key 1, cache is {4=4, 3=3} lRUCache.get(1); // return -1 (not found) lRUCache.get(3); // return 3 lRUCache.get(4); // return 4
Constraints:
1 <= capacity <= 3000
0 <= key <= 10^4
0 <= value <= 10^5
At most 2 * 10^5 calls will be made to get and put.
LinkedList first version:
namespace LeetCode2023_Paul.LRUCache;

public class LRUCache
{
    private Node? _rootNode;
    private int _capacity;

    public LRUCache(int capacity)
    {
        _rootNode = new Node();
        _capacity = capacity;
    }

    private (int, Node?) SearchKey(Node? currentNode, int key)
    {
        if (currentNode == null)
        {
            return (-1, null);
        }
        return currentNode.Key != key ? SearchKey(currentNode.Next, key) : (currentNode.Value, currentNode);
    }

    private int CountNode(Node? currentNode)
    {
        if (currentNode == null)
        {
            return 0;
        }
        return 1 + CountNode(currentNode.Next);
    }

    public int Get(int key)
    {
        var (value, foundNode) = SearchKey(_rootNode, key);
        if (value != -1)
        {
            if (foundNode.Key != _rootNode.Key)
                RemoveNode(foundNode);
            AddNode(foundNode);
        }
        return value;
    }

    private void RemoveLast(Node currentNode)
    {
        if (currentNode.Next != null && currentNode.Next.Next == null)
        {
            currentNode.Next = null;
        }
        else if (currentNode.Next != null)
        {
            RemoveLast(currentNode.Next);
        }
    }

    public void Put(int key, int value)
    {
        var (foundValue, foundNode) = SearchKey(_rootNode, key);
        if (foundValue == -1)
        {
            if (CountNode(_rootNode.Next) == _capacity)
            {
                // remove the last (least recently used) node
                RemoveLast(_rootNode);
            }
            var newNode = new Node() { Key = key, Value = value, Next = null };
            AddNode(newNode);
        }
        else
        {
            RemoveNode(foundNode);
            foundNode.Value = value;
            foundNode.Prev = null;
            AddNode(foundNode);
        }
    }

    private void AddNode(Node node)
    {
        var after = _rootNode.Next;
        _rootNode.Next = node;
        node.Prev = _rootNode;
        node.Next = after;
        if (after != null)
            after.Prev = node;
    }

    private void RemoveNode(Node node)
    {
        Node before = node.Prev;
        Node after = node.Next;
        before.Next = after;
        if (after != null)
            after.Prev = before;
    }
}

public class Node
{
    public int Key { get; set; }
    public int Value { get; set; }
    public Node? Prev { get; set; }
    public Node? Next { get; set; }
}
After writing the LinkedList version (it suddenly hit me that this really demands a structured, visual kind of brain XD), I finally got through the tests after a long fight (maybe I'm just not gifted).
Then came submission time, and the stress test turned into a disaster XD. This is really similar to what we often run into in everyday development:
ship the bare minimum and go live, only for an unexpected peak or an unconsidered case to happen anyway, and then the program crashes, locks, or times out. Very familiar.
LeetCode flat out threw a Time Limit Exceeded at me XD
Afterwards I went back over the definition carefully, reworked the C# version, and after reading a few other people's solutions I agreed that many of C#'s built-in object features can solve this directly. Rewritten as follows:
using System;
using System.Collections.Generic;
using System.Linq;

namespace LeetCode2023_Paul.LRUCacheV1;

public class LRUCacheV2
{
    private int _capacity;
    private List<int> _keys;
    private Dictionary<int, int> _cache;

    public LRUCacheV2(int capacity)
    {
        _keys = new List<int>();
        _cache = new Dictionary<int, int>();
        _capacity = capacity;
    }

    public int Get(int key)
    {
        var value = -1;
        if (_cache.ContainsKey(key))
        {
            value = _cache[key];
            // re-add any key that gets hit to the tail, so the least recently used key floats to the front
            _keys.Remove(key);
            _keys.Add(key);
        }
        return value;
    }

    public void Put(int key, int value)
    {
        if (_cache.ContainsKey(key))
        {
            _keys.Remove(key);
            _keys.Add(key);
            _cache[key] = value;
        }
        else
        {
            if (_cache.Keys.Count == _capacity)
            {
                // The least recently used key is always first, so when the cache is full, remove the first one
                // (not quite the same as the linked-list version, but the List gives the same effect).
                // LRU means Least Recently Used; the counterpart is LFU, Least Frequently Used.
                var lruKey = _keys.FirstOrDefault();
                _keys.Remove(lruKey);
                _cache.Remove(lruKey);
            }
            _keys.Add(key);
            _cache.Add(key, value);
        }
    }
}
Ha, much simpler. A few key points of the C# implementation here:
Dictionary is an unordered collection, so it can't be used as a dequeue (things like its "last" and "first" would be unreliable).
Use a List<int> for the ordered bookkeeping, with the Dictionary's keys doing the key-value mapping.
This is the simplest and quickest to put together: no need to hand-roll a LinkedList with recursion; very convenient~~
Rather than using a LinkedList and shuffling nodes around on every access, simply removing and re-adding the key is far more intuitive!
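One caveat about "fast": List<int>.Remove is itself an O(n) scan, so the version above is simpler but still not the O(1) the problem statement demands. If true O(1) is wanted, a common trick is to combine the built-in LinkedList<T> with a Dictionary that maps each key to its LinkedListNode, so a node can be unlinked without any searching. A sketch under those assumptions (the class name LRUCacheO1 is mine):

```csharp
using System;
using System.Collections.Generic;

public class LRUCacheO1
{
    private readonly int _capacity;
    private readonly LinkedList<int> _usage = new LinkedList<int>(); // front = least recently used
    private readonly Dictionary<int, (int Value, LinkedListNode<int> Node)> _cache
        = new Dictionary<int, (int, LinkedListNode<int>)>();

    public LRUCacheO1(int capacity) => _capacity = capacity;

    public int Get(int key)
    {
        if (!_cache.TryGetValue(key, out var entry))
            return -1;
        _usage.Remove(entry.Node);  // O(1): we already hold the node, no scan needed
        _usage.AddLast(entry.Node);
        return entry.Value;
    }

    public void Put(int key, int value)
    {
        if (_cache.TryGetValue(key, out var entry))
        {
            _usage.Remove(entry.Node);
            _usage.AddLast(entry.Node);
            _cache[key] = (value, entry.Node);
            return;
        }
        if (_cache.Count == _capacity)
        {
            var lruKey = _usage.First.Value; // evict the front, i.e. the least recently used key
            _usage.RemoveFirst();
            _cache.Remove(lruKey);
        }
        var node = _usage.AddLast(key);
        _cache[key] = (value, node);
    }
}
```

Because the dictionary hands back the exact node, LinkedList<T>.Remove(LinkedListNode<T>) unlinks it in constant time; only the eviction order lives in the list.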
My site has lived in the http world for a long time; recently I finally got around to setting up an SSL environment (because my wife wants to write for the site XD). I remember previously being stuck on PH …
The above is the little fragment of text rescued from cached pages XD
I accidentally discovered that the mariadb I configured years ago had a password with no security at all, and it got cracked.
After much agonizing, I found I had backed up an image in early June, so at least the articles from before 2022 could be rescued Orz
For now, this is as far as I've gotten…
The rest of the story of the intrusion and ransom will be added later….
Below are the logs of phpmyadmin being madly brute-forced for passwords (from docker logs).
The database even left me a warning message.
Lately my wife has wanted to write a blog (a burst of enthusiasm after our family trip to Japan).
Unfortunately the platforms, xuite and the old Wretch included, have shut down one after another, and the newer foreign platforms feel unfamiliar; so the idea turned to self-hosting, and it just so happened I had this wordpress sitting there.
Since there would be some public content, I figured I should be professional about it and set up ssl.
Applying for the SSL certificate was the easy part. While trying out ssh and the firewall (FW for short), to check whether the vm had a fw, I installed ufw via apt. Little did I know it was the start of a nightmare.
I used to connect over ssh all the time, but this time I simply used gcp's web SSH, figuring it would be pretty foolproof.
It was indeed convenient; webssh even integrates upload and download, so I could easily upload the SSL certificate and private key I was about to use
(though upload and download are really just ssh commands underneath).
Right after installing ufw, everything was still fine, and I went on to check the firewall of the gcloud virtual network (not the vm).
But after rebooting the VM, webssh suddenly couldn't get in… Because I was adjusting the gcp ssl firewall settings at the same time, I "assumed" something I had changed there caused the problem (a blind spot).
Note: this part was truly dumb. I kept tunnel-visioning on the outer fw, even added an allow-all rule, and it still didn't work..
Only later did I learn the cause: because I had installed ufw, the vm had its own firewall layer switched on, so the outer shell could no longer get into the vm.
It's a bit like removing the network interface from inside a vm: you can no longer connect over the network. In the old days, with a physical host, you could still walk up and operate it.
The reason I could rule out gcp's settings was that I generated a brand-new vm. I had also regenerated from the old image, but even re-attached to a "new" isolated network it still couldn't be reached; whereas the new VM plus a new network, under the outer fw settings as I understood them, yes, could be ssh'd into.
So I figured the problem was in the vm itself. Once I remembered having installed ufw, I tried putting an ufw disable command into the VM's startup script… just to try my luck.
In the VM instance's edit window you can find the field below.
After restarting the vm, sure enough, I got in.. right… self-inflicted.. case closed.
Note: as for this whole storm caused by installing ufw for the sake of a fw: in a cloud environment, if you can confirm the network is internal (not exposed), you can simply rely on google cloud's network and firewall settings (hooray for having a UI), and in the division of labour the vm can stay simple and just do its own job~
I've long known that .netcore runs cross-platform on linux, so running it inside a docker container is feasible too.
Recently, though, I've been studying apigateways and needed to simulate an infra architecture, which meant building up a multi-host setup on my local machine.
Unexpectedly, once I fell into this rabbit hole I found quite a few fiddly details; but after going back and forth it does become more and more familiar.
First, my solution consists of the following layers (the apigateway can also do loadbalancing; just add an nginx, but that's not the focus this time):
1. Redis (cache for counters shared across multiple instances)
2. WebApp (.net core)
3. ApiGateway (.net core + ocelot.net sdk)
4. Consul (service discovery service, for ha)
The main mechanism studied this time is the cooperation between 3 and 4.
Basically, with 3 alone, manual configuration works by default. But to achieve dynamic ha you must add a service discovery mechanism, and conveniently the ocelot framework supports consul (other options exist, but consul is the default).
For easy rebuilding, this time everything is maintained in a docker-compose yaml (the container names below are generalized; they only need to distinguish the roles):
version: "3"
services:
  nginx:
    build: ./nginx
    container_name: core-app.local
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - core-app-gateway1
    # network_mode: "host"
    networks:
      - backend
  consul1:
    image: consul
    container_name: node1
    command: agent -server -bootstrap-expect=2 -node=node1 -bind=0.0.0.0 -client=0.0.0.0 -datacenter=dc1
    # network_mode: "host"
    networks:
      - backend
  consul2:
    image: consul
    container_name: node2
    command: agent -server -retry-join=node1 -node=node2 -bind=0.0.0.0 -client=0.0.0.0 -datacenter=dc1
    depends_on:
      - consul1
    # network_mode: "host"
    networks:
      - backend
  consul3:
    image: consul
    hostname: discover-center
    container_name: node3
    command: agent -retry-join=node1 -node=node3 -bind=0.0.0.0 -client=0.0.0.0 -datacenter=dc1 -ui
    ports:
      - 8500:8500
    depends_on:
      - consul1
      - consul2
    # network_mode: "host"
    networks:
      - backend
  redis:
    container_name: myredis
    image: redis:6.0.6
    restart: always
    ports:
      - 6379:6379
    privileged: true
    command: redis-server /etc/redis/redis.conf --appendonly yes
    volumes:
      - $PWD/data:/data
      - $PWD/conf/redis.conf:/etc/redis/redis.conf
    networks:
      - backend
  core-app-discover-a1:
    build: ./dotnetapp/dotnetapp-discover
    hostname: webapp1
    environment:
      - HOSTNAME=http://0.0.0.0:40001
      - SERVER_HOSTNAME=http://192.168.1.115
      - SERVER_PORT=8500
      - CLIENT_HOSTNAME=http://core-app-discover-a1
      - CLIENT_PORT=40001
      - SERVICE_NAME=core-app-discover-a
      - RedisConnection=redis:6379,password=12345678,allowAdmin=true
    depends_on:
      - redis
      - consul3
    ports:
      - "40001"
    networks:
      - backend
  core-app-discover-a2:
    build: ./dotnetapp/dotnetapp-discover
    hostname: webapp2
    environment:
      - HOSTNAME=http://0.0.0.0:40001
      - SERVER_HOSTNAME=http://192.168.1.115
      - SERVER_PORT=8500
      - CLIENT_HOSTNAME=http://core-app-discover-a2
      - CLIENT_PORT=40001
      - SERVICE_NAME=core-app-discover-a
      - RedisConnection=redis:6379,password=12345678,allowAdmin=true
    depends_on:
      - redis
      - consul3
    ports:
      - "40001"
    networks:
      - backend
  core-app-discover-a3:
    build: ./dotnetapp/dotnetapp-discover
    hostname: webapp3
    environment:
      - HOSTNAME=http://0.0.0.0:40001
      - SERVER_HOSTNAME=http://192.168.1.115
      - SERVER_PORT=8500
      - CLIENT_HOSTNAME=http://core-app-discover-a3
      - CLIENT_PORT=40001
      - SERVICE_NAME=core-app-discover-a
      - RedisConnection=redis:6379,password=12345678,allowAdmin=true
    depends_on:
      - redis
      - consul3
    ports:
      - "40001"
    networks:
      - backend
  core-app-discover-b:
    build: ./dotnetapp/dotnetapp-discover
    hostname: webapp-b
    environment:
      - HOSTNAME=http://0.0.0.0:50001
      - SERVER_HOSTNAME=http://192.168.1.115
      - SERVER_PORT=8500
      - CLIENT_HOSTNAME=http://core-app-discover-b
      - CLIENT_PORT=50001
      - SERVICE_NAME=core-app-discover-b
      - RedisConnection=192.168.1.115:6379,password=xxxxxxx
    depends_on:
      - redis
      - consul3
    ports:
      - "50001:50001"
    networks:
      - backend
  core-app-discover-c:
    build: ./dotnetapp/dotnetapp-discover
    hostname: webapp-c
    environment:
      - HOSTNAME=http://0.0.0.0:50002
      - SERVER_HOSTNAME=http://192.168.1.115
      - SERVER_PORT=8500
      - CLIENT_HOSTNAME=http://core-app-discover-c
      - CLIENT_PORT=50002
      - SERVICE_NAME=core-app-discover-c
      - RedisConnection=192.168.1.115:6379,password=xxxxxxx
    depends_on:
      - redis
      - consul3
    ports:
      - "50002:50002"
    networks:
      - backend
  core-app-gateway1:
    build: ./dotnetapp/dotnetapp-gateway
    hostname: gateway1
    environment:
      - HOSTNAME=http://0.0.0.0:20001
      - SERVER_HOSTNAME=http://192.168.1.115
      - SERVER_PORT=8500
      - CLIENT_HOSTNAME=http://192.168.1.115
      - CLIENT_PORT=20001
      - SERVICE_NAME=core-app-gateway1
    depends_on:
      - consul3
    ports:
      - 20001:20001
    networks:
      - backend
networks:
  backend:
    driver: bridge
The yaml above creates a network so the containers can reach each other by hostname.
One thing this research made clear: to wire up service discovery, every backend webapp has to register itself with the register center and expose a health endpoint. But if you want auto scaling, scaling through docker-compose binds dynamic host names (with serial numbers), so I don't recommend using scale to spin up n containers directly; build several explicit endpoints instead (like my a1, a2, a3).
That way each webapp can be assigned its hostname and port, rather than the scaling mechanism leaving you unable to pin down the dynamic ports and names.
Below is the webapp's dockerfile (the gateway is packaged much the same way, so I won't paste it):
FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS build
WORKDIR /src
COPY ["dotnetapp-discover.csproj", "dotnetapp-discover/"]
RUN dotnet restore "dotnetapp-discover/dotnetapp-discover.csproj"
COPY . ./dotnetapp-discover
WORKDIR "/src/dotnetapp-discover"
RUN dotnet build "dotnetapp-discover.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "dotnetapp-discover.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "dotnetapp-discover.dll"]
For the counter feature and a cache shared across multiple instances, this time I also wired up a Redis connection.
The redis connection string can be injected through an environment variable (docker-compose can additionally manage different environments with .env files; see the official docs if interested), and in .netcore C# you can read the values handed in by docker with GetEnvironmentVariable:
public static IServiceCollection AddCacheService(this IServiceCollection services)
{
    var redisConnection = Environment.GetEnvironmentVariable("RedisConnection");
#if DEBUG
    redisConnection = "192.168.1.115:6379,password=xxxxxx,allowAdmin=true";
#endif
    services.AddSingleton<IConnectionMultiplexer>(
        ConnectionMultiplexer.Connect(redisConnection)
    );
    return services;
}
Note: allowAdmin was only turned on for repeated testing; otherwise this switch is quite dangerous: it can flush (wipe) your entire cache directly.
The shape of the log entries recorded in Redis
I registered both the ApiGateway and the WebApp with Consul for Service Discovery.
So the main approach this time follows the consul client sdk package. In Consul's registration model, services with the same Name are grouped together, and the IDs must differ; if the ServiceNames differ, then even identical programs are treated as separate nodes, and the HA judgment won't work.
static void RegisterConsul(string serviceName, string serverHostName, string serverPort, string clientHostName, string clientPort, IHostApplicationLifetime appLifeTime)
{
    var consulClient = new Consul.ConsulClient(p =>
        { p.Address = new Uri($"{serverHostName}:{serverPort}"); });
    var registration = new AgentServiceRegistration()
    {
        Check = new AgentServiceCheck()
        {
            Interval = TimeSpan.FromSeconds(10),
            HTTP = $"{clientHostName}{(clientPort == string.Empty ? "" : $":{clientPort}")}/health",
            Timeout = TimeSpan.FromSeconds(5),
            Method = "GET",
            Name = "check",
        },
        ID = Guid.NewGuid().ToString(),
        Name = serviceName,
        Address = Dns.GetHostName(),
        Port = int.Parse(clientPort),
    };
    try
    {
        consulClient.Agent.ServiceRegister(registration).Wait();
    }
    catch (Exception e)
    {
        Console.WriteLine(e);
    }
    // Deregister from Consul when the web service stops
    appLifeTime.ApplicationStopped.Register(() =>
    {
        consulClient.Agent.ServiceDeregister(registration.ID).Wait();
        consulClient.Dispose();
    });
}
The function above can be called during startup in program.cs to register, and all of its parameters can likewise come from docker-compose.
Example:
In the webapp's startup, the following is enough to grow the health endpoint:
builder.Services.AddHealthChecks();
app.MapHealthChecks("/health", new HealthCheckOptions()
{
    ResultStatusCodes =
    {
        [Microsoft.Extensions.Diagnostics.HealthChecks.HealthStatus.Healthy] = StatusCodes.Status200OK,
        [Microsoft.Extensions.Diagnostics.HealthChecks.HealthStatus.Degraded] = StatusCodes.Status200OK,
        [Microsoft.Extensions.Diagnostics.HealthChecks.HealthStatus.Unhealthy] = StatusCodes.Status503ServiceUnavailable
    }
});
You still have to configure MapHealthChecks' ResultStatusCodes per service: for consul, returning 200 is all it takes. Without this setting, the health response isn't actually 200, so the check is treated as a failure.
A normal request against the health endpoint:
Finally, the ocelot json configuration. Traditional Ocelot has its own Load-Balance mechanism, so with multiple Downstream services you could already manage them with settings like the ones below.
The drawback of doing it that way is that whenever you add or remove containers/hosts, you have to reconfigure the settings file, with possible host resets and extra maintenance difficulty. This is where the benefit of Service Discovery surfaces! In the configuration below, note that you don't define the DownStream Host; you only give a ServiceName, and this ServiceName must match the one the WebApp registered at Startup. With the ServiceDiscoveryProvider registered in GlobalConfiguration at the end, the convenient link between ApiGateway and Service Discovery is complete.
{
  "Routes": [
    {
      "DownstreamPathTemplate": "/test",
      "DownstreamScheme": "http",
      "UpstreamPathTemplate": "/newapi/get",
      "UpstreamHttpMethod": [
        "Get"
      ],
      "ServiceName": "core-app-discover-a",
      "LoadBalancerOptions": {
        "Type": "LeastConnection"
      }
    },
    {
      "DownstreamPathTemplate": "/test/list",
      "DownstreamScheme": "http",
      "UpstreamPathTemplate": "/newapi/list",
      "UpstreamHttpMethod": [
        "Get"
      ],
      "ServiceName": "core-app-discover-a",
      "LoadBalancerOptions": {
        "Type": "LeastConnection"
      }
    },
    {
      "DownstreamPathTemplate": "/test",
      "DownstreamScheme": "http",
      "DownstreamHttpMethod": "DELETE",
      "UpstreamPathTemplate": "/newapi/reset",
      "UpstreamHttpMethod": [
        "Get"
      ],
      "ServiceName": "core-app-discover-a"
    },
    {
      "Key": "api1",
      "DownstreamPathTemplate": "/test/list",
      "DownstreamScheme": "http",
      "UpstreamPathTemplate": "/newapi/list1",
      "UpstreamHttpMethod": [
        "Get"
      ],
      "ServiceName": "core-app-discover-a"
    },
    {
      "Key": "api2",
      "DownstreamPathTemplate": "/test/list",
      "DownstreamScheme": "http",
      "UpstreamPathTemplate": "/newapi/list2",
      "UpstreamHttpMethod": [
        "Get"
      ],
      "ServiceName": "core-app-discover-a"
    }
  ],
  "Aggregates": [
    {
      "RouteKeys": [
        "api1",
        "api2"
      ],
      "UpstreamPathTemplate": "/newapi/listall"
    }
  ],
  "GlobalConfiguration": {
    "BaseUrl": "http://0.0.0.0:20001",
    "ServiceDiscoveryProvider": {
      "Scheme": "http",
      "Host": "192.168.1.115",
      "Port": 8500,
      "Type": "PollConsul",
      "PollingInterval": 100
    },
    "QoSOptions": {
      "ExceptionsAllowedBeforeBreaking": 0,
      "DurationOfBreak": 0,
      "TimeoutValue": 0
    },
    "LoadBalancerOptions": {
      "Type": "LeastConnection",
      "Key": null,
      "Expiry": 0
    },
    "DownstreamScheme": "http",
    "HttpHandlerOptions": {
      "AllowAutoRedirect": false,
      "UseCookieContainer": false,
      "UseTracing": false
    }
  }
}
The ApiGateway above is likewise packaged into an Image with a DockerFile.
docker-compose up -d --force-recreate
You'll see this many services come up in one go.
http://localhost:20001/newapi/get (through the api gateway to the downstream service)
Send a few more requests here and you'll see machineName keep hopping among webapp1~3: with the load-balance mechanism, ocelot dispatches requests according to the strategy you configured.
The ApiGateway also has an Aggregation feature: it can combine multiple Api requests into a single one, cutting bandwidth consumption. I configured that too, so let's give it a try.
From the Ocelot.json configuration section:
Through the api gateway's composition, multiple Apis can be returned in a single call.
Finally, after firing many requests, here's an overview of all the request summaries (a separate program pulls the Request records out of Redis for the statistics):
http://localhost:20001/newapi/list (through the api gateway to the downstream service)
Try docker stop on the Discover-a1 service, and the Consul DashBoard shows a red dot immediately!
At this point, hit the load-balanced endpoint http://localhost:20001/newapi/get again, and you'll find machineName avoids the stopped service, leaving only webapp2 and webapp3 in rotation; so the quality of the backend services is protected.
A small hiccup
I accidentally used Guid.NewGuid() for the ID, so every restart fetched a new ID, creating one extra Instance. The old one stayed alive too, because their HealthChecks were identical.
Honestly, from an Infra perspective, there's still a lot to play with around the ApiGateway topic.
It also supports RateLimiting, QualityOfService (Polly), request aggregation (Aggregation), and other mechanisms; the configuration for each is in the official docs, and none of it is too complex to set up.
However, after developing in .net for a while, Ocelot seems to have stopped updating over the past year! Its apparent successor is a stronger apigateway written in another language: Envoy.
Envoy resources
https://www.envoyproxy.io/docs/envoy/latest/intro/what_is_envoy
https://www.netfos.com.tw/Projects/netfos/pages/solution/plan/20210111_plan.html
If you only need something lightweight, or want to study its architectural design, the github is worth digging into~
Github: https://github.com/ThreeMammals/Ocelot
If you want to take this and play with it, adjust the Ip and Password yourself.
https://www.uj5u.com/net/232249.html
ERROR [HPM] Error occurred while proxying request localhost:3000/auth/login to https://127.0.0.1:5001/ [DEPTH_ZERO_SELF_SIGNED_CERT] (https://nodejs.org/api/errors.html#errors_common_system_errors)
Solution:
https://dev.to/manizm/nuxt-ignore-ssl-issues-for-axios-proxy-development-only-381l