
Natural Language Processing with Deep Learning

푸닥거리 2020. 7. 4. 12:45

Deep learning

- non-linear
- artificial neural networks
- TensorFlow for training, Keras as the implementation layer
  - the "flow" of tensors (multidimensional arrays, variables)
- image processing, signal processing
- RNN: vocabulary and syntax analysis
  - time-series processing
  - natural language processing

Machine learning: autocompletion, prediction, computing distances between words (e.g. "cute puppy" vs. "cute cockroach"; see the sketch below), weights

- regression and classification
- clustering
- linear
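To make "distance between words" concrete, here is a minimal numpy sketch that compares made-up embedding vectors with cosine similarity; the words and vectors are invented for illustration, not taken from a trained model.

import numpy as np

def cosine_similarity(a, b):
    # 1.0 means the vectors point in the same direction; values near 0 mean unrelated
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# hypothetical 4-dimensional embeddings, invented for this example
puppy     = np.array([0.9, 0.8, 0.1, 0.0])
kitten    = np.array([0.8, 0.9, 0.2, 0.1])
cockroach = np.array([0.1, 0.0, 0.9, 0.8])

print(cosine_similarity(puppy, kitten))     # high: "close" words
print(cosine_similarity(puppy, cockroach))  # low: "distant" words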

 

 

https://www.anaconda.com/products/individual

 

 

 

1. The NLTK natural language processing package

 

Text mining: finding meaningful information in natural language; building a per-document term matrix from unstructured document data to support insight and decision-making; corpus

 

Documents (unstructured data) -> Corpus (stored documents) -> Term-Document Matrix (structured data) -> Analysis: classification, clustering, association, sentiment analysis
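A minimal sketch of the corpus-to-matrix step above, assuming scikit-learn's CountVectorizer; the three toy sentences are invented for illustration.

from sklearn.feature_extraction.text import CountVectorizer

# a tiny "corpus" of stored documents, invented for this example
corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
]

vectorizer = CountVectorizer()
tdm = vectorizer.fit_transform(corpus)      # sparse document-term count matrix

print(vectorizer.get_feature_names_out())   # vocabulary = matrix columns (older scikit-learn: get_feature_names())
print(tdm.toarray())                        # one row per document, one column per term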

 

NLP study topics; linear combinations; sparse matrices

- text preprocessing
- count-based word representations
- document similarity
- topic modeling
- association analysis
- NLP with deep learning: RNN, LSTM
- word embeddings: the Word2Vec package
- text classification: spam mail filtering
- tagging
- translation

 

Main features provided by the NLTK (English NLP) and KoNLPy (Korean NLP) packages

- corpus handling
- tokenizing
- morphological analysis: stem/root analysis, noun extraction
- POS (part-of-speech) tagging

 

Downloading the corpora

 

 

nltk.download("book")

 

 

 

Morphological analysis

 

 

from nltk.tokenize import word_tokenize

-> or, tokenizing with a regular expression:

from nltk.tokenize import RegexpTokenizer

ret = RegexpTokenizer("[\w]+")  # keep only runs of word characters (letters and digits) as tokens

ret.tokenize(emma[50:100])  # emma is the raw Gutenberg text loaded in the full example below

 

 

 

Examples of morphological analysis

- stemming: extracting the word stem
- lemmatizing: restoring the dictionary form (lemma)
- POS tagging

 

 

 

 

 

 

Stemming vs. lemmatizing

 

 

POS tagging

 

 

 

The Text class (no Korean support)

- word counts
- the number of distinct words in the novel is about 6% of the total tokens

Type a dot and press the Tab key to autocomplete the available methods.

2. Korean morphological analysis and word clouds

 

 

NLP terminology

- morpheme (형태소)
- predicate (용언: verbs and adjectives)
- root (어근)
- ending (어미)
- jamo (자모: the individual Hangul consonants and vowels)
- part of speech (품사)
- eojeol segmentation (어절: space-delimited word units)
- stopwords (불용어)
- n-gram (see the short example after this list)
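A quick look at n-grams, using NLTK's ngrams helper; the English sentence is an arbitrary example.

from nltk.util import ngrams

tokens = "the cat sat on the mat".split()

# bigrams (n=2): every pair of consecutive tokens
print(list(ngrams(tokens, 2)))
# [('the', 'cat'), ('cat', 'sat'), ('sat', 'on'), ('on', 'the'), ('the', 'mat')]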

 

 

Morphemes

 

KoNLPy: Korean NLP in Python

 

https://konlpy.org/

 

https://konlpy.org/en/latest/#api

 

https://konlpy.org/en/latest/morph/#comparison-between-pos-tagging-classes

 

 

 

https://docs.google.com/spreadsheets/d/1OGAjUvalBuX-oZvZ_-9tEfYD2gQe7hTGsgUpiiBSXI8/edit#gid=0

 

 

 

 

 

 

Eojeol -> preprocessing -> generating analysis candidates -> checking combination constraints -> selecting a candidate -> morphemes

 

Morphological analysis engines

- KoNLPy
- KOMORAN
- HanNanum
- KoNLP: note that KoNLPy depends on the JPype1 package

 

https://www.oracle.com/java/technologies/

-> Java SE 8u251

 

 

 

 

 

 

Morphological analyzers (compared side by side in the sketch after the lists below)

 

 

Hannanum

- morphs

- nouns

- pos

 

Komoran

- morphs

- nouns

- pos

 

Kkma

- morphs

- nouns

- pos
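A short sketch comparing the three analyzers on the example sentence used later in this post; each class exposes morphs, nouns, and pos. It assumes KoNLPy and a JVM are installed.

from konlpy.tag import Hannanum, Komoran, Kkma

text = "아름답지만 다소 복잡하기만 한 한국어는 전세계에서 13번째로 많이 사용되는 언어입니다."

for name, tagger in [("Hannanum", Hannanum()), ("Komoran", Komoran()), ("Kkma", Kkma())]:
    print(name)
    print("  morphs:", tagger.morphs(text))  # morpheme segmentation
    print("  nouns: ", tagger.nouns(text))   # nouns only
    print("  pos:   ", tagger.pos(text))     # (morpheme, tag) pairs; the tag sets differ between engines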

 

 

 

 

 

 

 

 

Legal corpus (KoNLPy's kolaw, used in the word-cloud example below)

http://coderby.com/forums/topic/jupyter-notebook-extension-%ec%a3%bc%ed%94%bc%ed%84%b0-%eb%85%b8%ed%8a%b8%eb%b6%81-%ed%99%95%ec%9e%a5%ed%8c%a9

Word embeddings: representing words as vectors

- sparse representations
- dense representations: capture word-to-word distance
- word embedding: the process above (mapping words to dense vectors)

Word2Vec
- CBOW: predicts the center word from the surrounding words
- Skip-Gram: predicts the surrounding words from the center word (choosing between the two in gensim is sketched below)
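A minimal sketch of switching between CBOW and Skip-Gram in gensim via the sg flag, assuming gensim 3.x (where the vector dimension parameter is size; gensim 4.x renamed it to vector_size). The toy corpus is invented.

from gensim.models import Word2Vec

# toy corpus: a list of tokenized sentences, invented for illustration
sentences = [
    ["cute", "puppy", "plays", "outside"],
    ["cute", "kitten", "sleeps", "inside"],
    ["cockroach", "hides", "inside"],
]

cbow = Word2Vec(sentences, size=50, window=2, min_count=1, sg=0)       # sg=0 -> CBOW
skipgram = Word2Vec(sentences, size=50, window=2, min_count=1, sg=1)   # sg=1 -> Skip-Gram

print(cbow.wv.most_similar("puppy"))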

 

pip install gensim

 

import nltk

nltk.download("book")

 

from nltk.book import *

nltk.corpus.gutenberg.fileids()

emma = nltk.corpus.gutenberg.raw("austen-emma.txt")

print(emma[:200])

 

from nltk.tokenize import sent_tokenize

sent_tokens = sent_tokenize(emma)

type(emma)

sent_tokens[10]

len(sent_tokens)

 

from nltk.tokenize import word_tokenize

print(word_tokenize(sent_tokens[10]))

 

from nltk.tokenize import RegexpTokenizer

ret = RegexpTokenizer("[\w]+")

print(ret.tokenize(sent_tokens[10]))

 

words = ["sending""cooking""files""lives""crying""dying"]

from nltk.stem import PorterStemmer

 

pst = PorterStemmer()

[pst.stem(w) for w in words]

 

from nltk.stem import LancasterStemmer

lst = LancasterStemmer()

[lst.stem(w) for w in words]

 

from nltk.stem.regexp import RegexpStemmer

rest = RegexpStemmer('ing')

[rest.stem(w) for w in words]

 

words = ["sending""cooking""files""lives""crying""dying"]

from nltk.stem.snowball import SnowballStemmer

sbst = SnowballStemmer("english")

[sbst.stem(w) for w in words]  # use the SnowballStemmer created above

 

words3 = ["cooking""belives"]

 

lst = LancasterStemmer()

[lst.stem(w) for w in words3]

 

from nltk.stem.wordnet import WordNetLemmatizer

wl = WordNetLemmatizer()

[wl.lemmatize(w) for w in words3]

 

from nltk.tag import pos_tag

sent_tokens[10]

tagged_list = pos_tag(word_tokenize(sent_tokens[10]))

print(tagged_list)

 

nouns_list = [ t[0] for t in tagged_list if t[1]=='NN' ]

nouns_list

 

ret = RegexpTokenizer("[a-zA-Z]{3,}")

emma_tokens = ret.tokenize(emma)

nouns_list = [ t[0] for t in tagged_list if t[1]=='NN' ]

len(set(emma_tokens))

len(emma_tokens)

len(set(emma_tokens))/len(emma_tokens)

 

from nltk import Text

emma_text = Text(emma_tokens)

 

type(emma_text)

 

emma_text.plot(20)

emma_text.concordance("Emma", lines=5)

emma_text.similar("general")

 

emma_text.dispersion_plot(["Emma", "Jane", "Robert"])

 

ret = RegexpTokenizer("[a-zA-Z]{3,}")

emma_tokens = ret.tokenize(emma)

nouns_list = [ t[0] for t in tagged_list if t[1]=='NN' ]

emma_text = Text(emma_tokens)

emma_text.plot(20)

 

from nltk import FreqDist

 

emma_tokens = pos_tag(emma_text)

stopwords = ["CHAPTER""End""Nothing"]

 

names_list = [ t[0] for t in emma_tokens if t[1]=="NNP" and t[0] not in stopwords ]

emma_df_names = FreqDist(names_list)

 

emma_df_names

 

!pip install konlpy

 

from konlpy.tag import Hannanum

han = Hannanum()

text = "아름답지만 다소 복잡하기만 한 한국어는 전세계에서 13번째로 많이 사용되는 언어입니다."

han.analyze(text)

han.nouns(text)

pos_t = han.pos(text, ntags=22)

[ t[0] for t in pos_t if t[1]=='PV' ]

 

!pip install wordcloud

 

print(r"Hello\nworld")

word_list = komoran.nouns("%r"%data[0:1000])

import matplotlib.pyplot as plt

%matplotlib inline

 

from wordcloud import WordCloud

wordc = WordCloud()

wordc.generate(text)

plt.figure()

plt.imshow(wordc, interpolation="bilinear")

 

!pip install jupyter_contrib_nbextensions && jupyter contrib nbextension install

 

wordc = WordCloud(background_color='white', max_words=20, font_path='c:/Windows/Fonts/malgun.ttf', relative_scaling=0.2)

wordc.generate(text)

 

plt.figure()

plt.imshow(wordc, interpolation="bilinear")

plt.axis('off')

 

from konlpy.corpus import kolaw

data = kolaw.open('constitution.txt').read()

from konlpy.tag import Komoran

komoran = Komoran()

word_list = komoran.nouns("%r"%data)

text = ' '.join(word_list)

wordcloud = WordCloud(background_color='white', max_words=20, font_path='c:/Windows/Fonts/malgun.ttf', relative_scaling=0.2)

wordcloud.generate(text)

plt.figure(figsize=(15,10))

plt.imshow(wordcloud, interpolation="bilinear")

plt.axis('off')

 

from PIL import Image

import numpy as np

img = Image.open("south_korea.png").convert("RGBA")

mask = Image.new("RGB", img.size, (255,255,255))

mask.paste(img,img)

mask = np.array(mask)

wordcloud = WordCloud(background_color='white', max_words=2000, font_path='c:/Windows/Fonts/malgun.ttf', mask=mask, random_state=42)

wordcloud.generate(text)

wordcloud.to_file("result1.png")

 

import requests

rss_url = "http://fs.jtbc.joins.com/RSS/economy.xml"

jtbc_economy = requests.get(rss_url)

from bs4 import BeautifulSoup

economy_news_list = BeautifulSoup(jtbc_economy.content, "xml")

link_list = economy_news_list.select("item > link")

 

from konlpy.tag import Kkma

kkma = Kkma()

 

news = []

for link in link_list:

    news_url = link.text

    news_response = requests.get(news_url)

    news_soup = BeautifulSoup(news_response.content, "html.parser")

    news_content = news_soup.select_one("#articlebody > .article_content")

    news.append(list(filter(lambda word: len(word) > 1, kkma.nouns(news_content.text))))

 

!pip install -U gensim

 

from gensim.models import Word2Vec

model = Word2Vec(news, size=100, window=5, min_count=2, workers=4)  # gensim 3.x API; use a positive workers count (workers=-1 effectively skips training)

model.wv.most_similar("부동산")

 

 

 

 

3. Artificial neural networks

 

Requires labeled answers (supervised learning)

 

Classification, regression, clustering (clustering has X but no y)

 

The AI winter

1. Overfitting
2. Local optima
3. The error should keep decreasing but stops decreasing; the weights stay the same and learning stalls

 

Decision trees, linear data

 

CNN, ImageNet Challenge 2012

 

Shallow AI: deep learning, machine learning

-> deep AI

 

 

Structure of a biological neuron

 

 

Structure of an artificial neuron

 

 

weight: the values that must be obtained through training

f: the activation function; choosing an appropriate function matters

 

Common activation functions (applied by a neuron to produce the value it sends on):

- Softmax: the outputs sum to 1, so it is used in the output layer for classification (see the numpy sketch after this list)
- Sigmoid
- tanh(x)
- Binary step
- Gaussian
- ReLU
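A small numpy sketch showing that softmax outputs sum to 1 (usable as class probabilities) while sigmoid squashes each value independently; the input scores are arbitrary.

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()          # positive values that sum to 1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # each element squashed into (0, 1) independently

logits = np.array([2.0, 1.0, 0.1])             # arbitrary example scores
print(softmax(logits), softmax(logits).sum())  # sums to 1.0
print(sigmoid(logits))                         # does not sum to 1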

 

 

 

Artificial neural networks: input layer, hidden layers, output layer

 

 

 

 

 

 

Input layer: one node per input variable
Output layer:
Hidden layers:

Multilayer networks (MLP, DNN): multiple hidden layers

 

 

 

 

DNN layers

Hidden1: 50 neurons, activation -> relu
Hidden2: 30 neurons, activation -> relu
Output: 10 neurons, activation -> softmax

Loss function: cross-entropy
Optimizer: gradient descent; the error gradient is a derivative (slope), and a smaller gradient means we are closer to the minimum error; learning rate: 0.001
Batch size: 100
Epochs: 200
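A hedged Keras sketch of the layer and hyperparameter spec above; the 20-feature input and the random training data are placeholders, since the notes do not name a dataset.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

model = Sequential([
    Dense(50, activation="relu", input_shape=(20,)),  # Hidden1: 50 neurons, ReLU (input size is a placeholder)
    Dense(30, activation="relu"),                     # Hidden2: 30 neurons, ReLU
    Dense(10, activation="softmax"),                  # Output: 10 classes, softmax
])

# cross-entropy loss, plain gradient descent, learning rate 0.001
model.compile(optimizer=SGD(learning_rate=0.001),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# random placeholder data just to make the sketch runnable
X = np.random.rand(1000, 20)
y = np.eye(10)[np.random.randint(0, 10, size=1000)]   # one-hot labels

model.fit(X, y, batch_size=100, epochs=200, verbose=0)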

 

- MAE (mean absolute error)
- MSE (mean squared error)
- RMSE (root mean squared error): on the scale of the target, like a standard deviation (computed in the short example below)
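A few lines of numpy to make the three error metrics concrete; the true and predicted values are arbitrary.

import numpy as np

y_true = np.array([3.0, 5.0, 2.5, 7.0])   # arbitrary targets
y_pred = np.array([2.5, 5.0, 4.0, 8.0])   # arbitrary predictions

mae = np.mean(np.abs(y_true - y_pred))    # mean absolute error
mse = np.mean((y_true - y_pred) ** 2)     # mean squared error
rmse = np.sqrt(mse)                       # root mean squared error, in the same units as y

print(mae, mse, rmse)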

 

 

MLPClassifier

 

import seaborn as sns

iris = sns.load_dataset("iris")

from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()

le.fit(iris.species)

 

iris.species = le.transform(iris.species)

from sklearn.model_selection import train_test_split

iris_X = iris.iloc[:,:-1]

iris_y = iris.species # iris.iloc[:,-1]

train_X, test_X, train_y, test_y = train_test_split(iris_X, iris_y, test_size=0.3, random_state=1)

train_X.shape, test_X.shape

 

from sklearn.neural_network import MLPClassifier 

mlp = MLPClassifier(hidden_layer_sizes=(50,50,30), max_iter=500)

mlp.fit(train_X, train_y)

pred = mlp.predict(test_X)

pred

 

mlp.score(test_X, test_y)

 

 

 

https://archive.ics.uci.edu/ml/index.php

 

https://archive.ics.uci.edu/ml/datasets/wine+quality

 

https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/

 

 

Classifying quality grades of the winequality data with MLPClassifier

 

#from IPython.core.display import display, HTML

#display(HTML("""<style>div."""))

 

import pandas as pd

redwine = pd.read_csv("winequality-red.csv", sep=";")

redwine.head()

redwine_X = redwine.iloc[:, :-1]

redwine_y = redwine.quality # redwine.iloc[:, -1]

 

from sklearn.model_selection import train_test_split

train_X, test_X, train_y, test_y = train_test_split(redwine_X, redwine_y, test_size=0.3, random_state=1)

 

from sklearn.neural_network import MLPClassifier

model = MLPClassifier(hidden_layer_sizes=(50,50,30), max_iter=500)  #, verbose=True

 

model.fit(train_X, train_y)

 

model.score(test_X, test_y)

 

pred = model.predict(test_X)

 

pd.crosstab(test_y, pred)

 

 

 

 

Hadoop, Spark 

 

Things to decide when defining a neural network model in a deep-learning framework:

- number of layers
- number of neurons per layer
- activation functions
- loss function
- optimizer
- learning rate
- number of epochs
- batch size

 

Scikit-learn's MLPClassifier is geared toward classic machine learning, whereas TensorFlow's DNNClassifier targets deep learning.

 

 

 

 

4. Implementing a neural network with Keras

 

 

 

 

Keras, https://keras.io/

- a high-level interface that makes it easy to build deep-learning models
- wires up the hidden-layer weight matrices automatically

Installing TensorFlow with conda vs. pip gives different versions

Sequential model vs. Functional API

Dropout reduces overfitting

 

 

 

 

conda install tensorflow

 

 

Classifying iris species with Keras

 

#from IPython.core.display import display, HTML

#display(HTML(

#"""<style>

#div.container { width:100% !import; } 

#div.CodeMirror {font-family: Consolas; font-size: 16pt;} 

#div.output { font-size:16pt; font_weight: bold;} 

#div.input { font-family; Consolas; font-size: 16pt; }

#div.prompt { min-width: 100px; }

#</style>

#"""))

 

import tensorflow # conda install tensorflow

#tensorflow.__version__

 

from tensorflow.keras.models import Sequential

from tensorflow.keras.layers import Dense

 

import seaborn as sns

 

iris = sns.load_dataset("iris")

 

iris_X = iris.iloc[:, :-1]

iris_y = iris.iloc[:, -1]

 

import pandas as pd

 

iris_onehot = pd.get_dummies(iris_y)

#iris_onehot.to_numpy()

 

model = Sequential()

 

model.add(Dense(4, activation="relu"))

model.add(Dense(50, activation="relu"))

model.add(Dense(50, activation="relu"))

model.add(Dense(30, activation="relu"))

model.add(Dense(3, activation="softmax"))

 

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]) #metrics=["acc"])

 

from sklearn.model_selection import train_test_split

train_X, test_X, train_y, test_y = train_test_split(iris_X, iris_onehot, test_size=0.3)

 

train_X.to_numpy().shape, train_y.to_numpy().shape

 

model.fit( train_X.to_numpy(), train_y.to_numpy(), batch_size=50, epochs=200, verbose=1 )

 

model.predict(test_X)

 

model.evaluate(test_X, test_y)

 

 

 

 

 

 

import numpy as np

pred = np.argmax(model.predict(test_X), axis=1)  # predict returns per-class probabilities; argmax gives the index of the largest one

pred  # predicted classes

np.argmax(test_y.to_numpy(), axis=1)  # ground-truth classes of the test data

pd.crosstab(np.argmax(test_y.to_numpy(), axis=1), pred)  # cross-tabulation (confusion) table

 

model.evaluate(test_X, test_y) 

 

 

 

Optimizer
- SGD
- RMSprop
- Adagrad
- Adadelta
- Adam: momentum can carry it slightly past an optimum, letting training keep progressing
- Adamax
- Nadam

 

Activation functions

- softmax

- elu

- selu

- softsign

- relu

- tanh

- sigmoid

- hard_sigmoid

 

Advanced Activation functions

- LeakyReLU

- PReLU

- ELU

- ThresholdedReLU

- Softmax: used in the output layer

- ReLU: common in image processing

 

 

 

 

Batch normalization

Why training becomes unstable: internal covariate shift

Columns whose variance is 0 should be excluded from training (weight = 0)

Dense (hidden layer) -> Dropout -> BatchNormalization -> Dense (hidden layer), as sketched below
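A hedged Keras sketch of that layer ordering (hidden Dense -> Dropout -> BatchNormalization -> Dense); the 11-feature input, layer sizes, and 6-class output are placeholders borrowed from the wine example below.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Dropout, BatchNormalization

model = Sequential()
model.add(Input(shape=(11,)))               # placeholder input size (the 11 redwine features)
model.add(Dense(50, activation="relu"))     # hidden layer
model.add(Dropout(0.3))                     # randomly zeroes 30% of activations to reduce overfitting
model.add(BatchNormalization())             # normalizes the next layer's inputs to stabilize training
model.add(Dense(30, activation="relu"))     # next hidden layer
model.add(Dense(6, activation="softmax"))   # output layer (6 quality classes in the wine example)

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()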

 

 

 

 

Loss functions

- mean_squared_error

- mean_absolute_error

 

 

 

Classifying quality grades of the winequality data with Keras

 

import pandas as pd

import numpy as np

 

redwine = pd.read_csv("winequality-red.csv", sep=";")

 

redwine_X = redwine.iloc[:, :-1].to_numpy()

 

redwine_y = redwine.iloc[:, -1]

 

redwine_onehot = pd.get_dummies(redwine_y).to_numpy()

 

from sklearn.model_selection import train_test_split

train_X, test_X, train_y, test_y = train_test_split(redwine_X, redwine_onehot, test_size=0.3)

 

from tensorflow.keras.models import Sequential

model = Sequential()

 

from tensorflow.keras.layers import Input, Dense

 

model.add(Input(11))

model.add(Dense(50, activation="relu"))

model.add(Dense(50, activation="relu"))

model.add(Dense(30, activation="relu"))

model.add(Dense(6, activation="softmax"))

 

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

model.fit(train_X, train_y, batch_size=200, epochs=200, verbose=1)

 

import numpy as np

pred = np.argmax(model.predict(test_X), axis=1)

 

pred + 3  # quality grades start at 3, so shift the predicted class index

pd.crosstab(np.argmax(test_y, axis=1) + 3, pred + 3)  # cross-tabulation (confusion) table

 

model.evaluate(test_X, test_y)

 

 

 

 

 

 

Callbacks: objects that run when specific conditions are met during training

- ModelCheckpoint
- EarlyStopping
- LearningRateScheduler
- TensorBoard
- CSVLogger

 

 

import pandas as pd

import numpy as np

 

redwine = pd.read_csv("winequality-red.csv", sep=";")

 

redwine_X = redwine.iloc[:, :-1].to_numpy()

 

redwine_y = redwine.iloc[:, -1]

 

redwine_onehot = pd.get_dummies(redwine_y).to_numpy()

 

from sklearn.model_selection import train_test_split

train_X, test_X, train_y, test_y = train_test_split(redwine_X, redwine_onehot, test_size=0.3)

 

from tensorflow.keras.models import Sequential

model = Sequential()

 

from tensorflow.keras.layers import Input, Dense

 

model.add(Input(11))

model.add(Dense(50, activation="relu"))

model.add(Dense(50, activation="relu"))

model.add(Dense(30, activation="relu"))

model.add(Dense(6, activation="softmax"))

 

from tensorflow.keras.callbacks import ModelCheckpoint #, EarlyStopping

 

checkpoint = ModelCheckpoint(filepath="redwine-{epoch:03d}-{val_acc:.4f}.hdf5",  # or use an .h5 extension
                             monitor="val_acc",  # monitoring a val_ metric requires validation data (validation_split=0.2 or validation_data); newer TF versions name this metric val_accuracy
                             save_best_only=True,  # mode='auto'; save_weights_only=False by default
                             verbose=1  # detailed logging
                            )

 

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

 

model.fit(train_X, train_y, 

          validation_data=(test_X, test_y),  # validation data (with ground-truth labels)

          callbacks=[checkpoint],

          batch_size=200, epochs=200, verbose=1)

 

 

 

 

 

 

 

 

 

 

TensorFlow install
conda (Anaconda prompt) install -> 1.x
pip install -> 2.0

in the Anaconda prompt:

Check the installed TensorFlow
conda list tensorflow

Remove TensorFlow
conda remove tensorflow
conda remove tensorflow-base
pip uninstall tensorflow-estimator

Install TensorFlow 2.2.0 with pip
pip install tensorflow==2.2.0
pip install tensorflow-cpu

conda install tensorflow

 

Windows cmd:

pscp.exe model.py userid@ip-address:/home/userid/data/model.py

Linux terminal:

python model.py &

ps -ef | grep python

Back in a Windows cmd window:

pscp userid@ip-address:/home/userid/data/...h5 model...h5

 

 

 

 

 

 

 

Early Stopping Callback

 

import pandas as pd

import numpy as np

 

redwine = pd.read_csv("winequality-red.csv", sep=";")

 

redwine_X = redwine.iloc[:, :-1].to_numpy()

 

redwine_y = redwine.iloc[:, -1]

 

redwine_onehot = pd.get_dummies(redwine_y).to_numpy()

 

from sklearn.model_selection import train_test_split

train_X, test_X, train_y, test_y = train_test_split(redwine_X, redwine_onehot, test_size=0.3)

 

from tensorflow.keras.models import Sequential

model = Sequential()

 

from tensorflow.keras.layers import Input, Dense

 

model.add(Input(11))

model.add(Dense(50, activation="relu"))

model.add(Dense(50, activation="relu"))

model.add(Dense(30, activation="relu"))

model.add(Dense(6, activation="softmax"))

 

from tensorflow.keras.callbacks import ModelCheckpoint #, EarlyStopping

 

checkpoint = ModelCheckpoint(filepath="redwine-{epoch:03d}-{val_acc:.4f}.hdf5",  # or use an .h5 extension
                             monitor="val_acc",  # monitoring a val_ metric requires validation data (validation_split=0.2 or validation_data); newer TF versions name this metric val_accuracy
                             save_best_only=True,  # mode='auto'; save_weights_only=False by default
                             verbose=1  # detailed logging
                            )



from tensorflow.keras.callbacks import EarlyStopping

early_stopping = EarlyStopping(monitor="val_acc", patience=10)

 

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

 

model.fit(train_X, train_y, 

          validation_data=(test_X, test_y),  # validation data (with ground-truth labels)

          callbacks=[checkpoint, early_stopping],

          batch_size=200, epochs=2000, verbose=1)

 

 

 

 

 

 

 

 

 

import tensorflow

 

import pandas as pd

import numpy as np

 

redwine = pd.read_csv("winequality-red.csv", sep=";")

 

redwine_X = redwine.iloc[:, :-1].to_numpy()

redwine_y = redwine.iloc[:, -1]

 

redwine_onehot = pd.get_dummies(redwine_y).to_numpy()

 

from sklearn.model_selection import train_test_split

train_X, test_X, train_y, test_y = train_test_split(redwine_X, redwine_onehot, test_size=0.3)

 

from tensorflow.keras.models import Sequential

model = Sequential()

 

from tensorflow.keras.layers import Input, Dense

 

model.add(Input(11))

model.add(Dense(50, activation="relu"))

model.add(Dense(50, activation="relu"))

model.add(Dense(30, activation="relu"))

model.add(Dense(6, activation="softmax"))

 

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

model.load_weights("redwine-062-0.5521.hdf5")

model.evaluate(test_X, test_y)

 

 

 

 

 

 

 

5. RNN

 

 

 

Loading a saved model and using it for prediction

 

CNN: learns image filters via convolution

 

 

 

 

 

 

 

RNN: recurrent neural networks

 

 

 

 

 

The y value computed at the previous step
The previous y value

- what was learned at the previous step is fed into the next step

 

 

Bidirectional recurrent neural networks

 

 

 

 

 

 

 

 

 

Predicting the next word from the preceding context

Sparse encoding of size vocab_size; a single sentence is expanded into several training rows; word indices start from 1

 

Embedding

SimpleRNN

 

nltk, keras_preprocessing.text

 

Morpheme segmentation

 

경마장에 있는 말이 뛰고 있다 ("a horse at the racetrack is running")

-> 경마장에 있는 말이, 있는 말이, 말이 뛰고, ...

 

pad_sequences with padding='pre' fills the front of each sequence with zeros (see the small example below)
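A tiny example of 'pre' padding; the index sequences are arbitrary.

from keras.preprocessing.sequence import pad_sequences

# two encoded sequences of different lengths (arbitrary word indices)
demo = [[2, 3], [2, 3, 1, 4]]

# padding='pre' fills zeros at the front so every row ends up with length maxlen
print(pad_sequences(demo, maxlen=5, padding='pre'))
# [[0 0 0 2 3]
#  [0 2 3 1 4]]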

 

 

 

text = """경마장에 있는 말이 뛰고 있다\n

그의 말이 법이다\n

가는 말이 고와야 오는 말이 곱다\n"""

 

from keras_preprocessing.text import Tokenizer

t = Tokenizer()

t.fit_on_texts([text])

encoded = t.texts_to_sequences([text])[0]

 

vocab_size = len(t.word_index) + 1

print('단어 집합의 크기: %d' % vocab_size)

 

print(t.word_index)

 

sequences = list()

for line in text.split('\n'):

    encoded = t.texts_to_sequences([line])[0]

    for i in range(1, len(encoded)):

        sequence = encoded[:i+1]

        sequences.append(sequence)

        

print('훈련 데이터의 개수: %d' % len(sequences))

 

print(sequences)

 

print(max(len(s) for s in sequences))

 

from keras.preprocessing.sequence import pad_sequences

sequences = pad_sequences(sequences, maxlen=6, padding='pre')

 

import numpy as np

sequences = np.array(sequences)

 

X = sequences[:,:-1]

y = sequences[:,-1]

 

print(X)

 

print(y)

 

from keras.utils import to_categorical

y = to_categorical(y, num_classes=vocab_size)

 

print(y)

 

from keras.layers import Embedding, Dense, SimpleRNN

from keras.models import Sequential

 

model = Sequential()

 

model.add(Embedding(vocab_size, 10, input_length=5))

model.add(SimpleRNN(32))

model.add(Dense(vocab_size, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

model.fit(X, y, epochs=200, verbose=2)

 

def sentence_generation(model, t, current_word, n):

    init_word = current_word

    sentence = ''

    

    for _ in range(n):

        encoded = t.texts_to_sequences([current_word])[0]

        encoded = pad_sequences([encoded], maxlen=5, padding='pre')

        result = model.predict_classes(encoded, verbose=0)  # removed in newer Keras; use np.argmax(model.predict(encoded), axis=-1) instead

        

        for word, index in t.word_index.items():

            if index == result:

                break

        

        current_word = current_word + ' ' + word

        sentence = sentence + ' ' + word

        

    sentence = init_word + sentence

    return sentence

 

print(sentence_generation(model, t, '경마장에', 4))

 

print(sentence_generation(model, t, '그의', 2))

 

model_json = model.to_json()

with open("redwin.json", "w") as json_file:
    json_file.write(model_json)  # save the model architecture as JSON

from keras.models import model_from_json

with open("redwin.json", "r") as json_file:
    loaded_model_json = json_file.read()
    model = model_from_json(loaded_model_json)

 

 

model.evaluate(test_X, test_y)  # the reloaded model must be compiled before it can be used

 

 

 

6. LSTM

 

 

RNN: effectively remembers only the immediately preceding step

LSTM: can remember information from earlier steps as well (see the sketch below)
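A hedged sketch of swapping the SimpleRNN layer from the section 5 example for an LSTM layer; it reuses the vocab_size, X, and y built in that example.

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

model = Sequential()
model.add(Embedding(vocab_size, 10, input_length=5))   # vocab_size from the Tokenizer example above
model.add(LSTM(32))                                    # the LSTM cell state retains longer contexts than SimpleRNN
model.add(Dense(vocab_size, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=200, verbose=0)                 # X, y: the padded sequences from section 5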

 

 

 

 

 

np.uint8

 

PIL

opencv-python

 

Working with N-dimensional arrays
DataFrames
Data visualization
Collecting web data

 

 

 

 

 

 

 

 

 

 



Registering an availability multi-step web test

푸닥거리 2020. 7. 3. 09:01

https://github.com/uglide/azure-content/blob/master/articles/application-insights/app-insights-monitor-web-app-availability.md#multi-step-web-tests

 


Multi-step web tests

You can monitor a scenario that involves a sequence of URLs. For example, if you are monitoring a sales website, you can test that adding items to the shopping cart works correctly.

To create a multi-step test, you record the scenario by using Visual Studio, and then upload the recording to Application Insights. Application Insights will replay the scenario at intervals and verify the responses.

Note that you can't use coded functions in your tests: the scenario steps must be contained as a script in the .webtest file.

1. Record a scenario

Use Visual Studio Enterprise or Ultimate to record a web session.

  1. Create a web performance test project.

  2. Open the .webtest file and start recording.

  3. Do the user actions you want to simulate in your test: open your website, add a product to the cart, and so on. Then stop your test.

    Don't make a long scenario. There's a limit of 100 steps and 2 minutes.

  4. Edit the test to:

  • Add validations to check the received text and response codes.

  • Remove any superfluous interactions. You could also remove dependent requests for pictures or to ad or tracking sites.

    Remember that you can only edit the test script - you can't add custom code or call other web tests. Don't insert loops in the test. You can use standard web test plug-ins.

  5. Run the test in Visual Studio to make sure it works.

    The web test runner opens a web browser and repeats the actions you recorded. Make sure it works as you expect.

2. Upload the web test to Application Insights

  1. In the Application Insights portal, create a new web test.

  2. Select multi-step test, and upload the .webtest file.

     

    Set the test locations, frequency, and alert parameters in the same way as for ping tests.

View your test results and any failures in the same way as for single-url tests.

A common reason for failure is that the test runs too long. It mustn't run longer than two minutes.

Don't forget that all the resources of a page must load correctly for the test to succeed, including scripts, style sheets, images and so forth.

Note that the web test must be entirely contained in the .webtest file: you can't use coded functions in the test.

Plugging time and random numbers into your multi-step test

Suppose you're testing a tool that gets time-dependent data such as stocks from an external feed. When you record your web test, you have to use specific times, but you set them as parameters of the test, StartTime and EndTime.

When you run the test, you'd like EndTime always to be the present time, and StartTime should be 15 minutes ago.

Web Test Plug-ins provide the way to do this.

  1. Add a web test plug-in for each variable parameter value you want. In the web test toolbar, choose Add Web Test Plugin.

    In this example, we'll use two instances of the Date Time Plug-in. One instance is for "15 minutes ago" and another for "now".

  2. Open the properties of each plug-in. Give it a name and set it to use the current time. For one of them, set Add Minutes = -15.

  3. In the web test parameters, use {{plug-in name}} to reference a plug-in name.

Now, upload your test to the portal. It will use the dynamic values on every run of the test.

Dealing with sign-in

If your users sign in to your app, you have a number of options for simulating sign-in so that you can test pages behind the sign-in. The approach you use depends on the type of security provided by the app.

In all cases, you should create an account just for the purpose of testing. If possible, restrict its permissions so that it's read-only.

  • Simple username and password: Just record a web test in the usual way. Delete cookies first.
  • SAML authentication. For this, you can use the SAML plugin that is available for web tests.
  • Client secret: If your app has a sign-in route that involves a client secret, use that. Azure Active Directory provides this.
  • Open Authentication - for example, signing in with your Microsoft or Google account. Many apps that use OAuth provide the client secret alternative, so the first tactic is to investigate that. If your test has to sign in using OAuth, the general approach is:
  • Use a tool such as Fiddler to examine the traffic between your web browser, the authentication site, and your app.
  • Perform two or more sign-ins using different machines or browsers, or at long intervals (to allow tokens to expire).
  • By comparing different sessions, identify the token passed back from the authenticating site, that is then passed to your app server after sign-in.
  • Record a web test using Visual Studio.
  • Parameterize the tokens, setting the parameter when the token is returned from the authenticator, and using it in the query to the site. (Visual Studio will attempt to parameterize the test, but will not correctly parameterize the tokens.)

Edit or disable a test

Open an individual test to edit or disable it.

You might want to disable web tests while you are performing maintenance on your service.

Performance tests

You can run a load test on your website. Like the availability test, you can send either simple requests or multi-step requests from our points around the world. Unlike an availability test, many requests are sent, simulating multiple simultaneous users.

From the Overview blade, open Settings, Performance Tests. When you create a test, you are invited to connect to or create a Visual Studio Team Services account.

When the test is complete, you'll be shown response times and success rates.

Automation
