
Essential Knowledge for a Java Backend Developer

What does I/O mean in Java?

I/O means input and output: writing data to and reading data from files.

Reading a file in Java (the traditional approach, later superseded by NIO):

Use BufferedReader together with FileReader to read a file's contents:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ReadFileExample {
    public static void main(String[] args) {
        try (BufferedReader br = new BufferedReader(new FileReader("example.txt"))) {
            String line;
            while ((line = br.readLine()) != null) {
                System.out.println(line);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Writing a file in Java:

Use BufferedWriter together with FileWriter to write file contents:

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

public class WriteFileExample {
    public static void main(String[] args) {
        try (BufferedWriter bw = new BufferedWriter(new FileWriter("output.txt"))) {
            bw.write("Hello, world!");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Multithreading in Java --- extends Thread

public class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println("Thread is running: " + Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        MyThread thread1 = new MyThread();
        MyThread thread2 = new MyThread();
        thread1.start(); // start the first thread
        thread2.start(); // start the second thread
    }
}

Multithreading in Java --- new Thread + implements Runnable

public class MyRunnable implements Runnable {
    @Override
    public void run() {
        System.out.println("Runnable is running: " + Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        Thread thread1 = new Thread(new MyRunnable());
        Thread thread2 = new Thread(new MyRunnable());
        thread1.start(); // start the first thread
        thread2.start(); // start the second thread
    }
}

The Java collections framework (lists, sets, queues, and key-value maps); a short usage sketch follows the diagram:

Collection
├── List
│   ├── ArrayList
│   ├── LinkedList
│   ├── Vector
│   └── Stack
├── Set
│   ├── HashSet
│   ├── LinkedHashSet
│   └── TreeSet
└── Queue
    ├── LinkedList
    └── PriorityQueue
Map
├── HashMap
├── LinkedHashMap
├── TreeMap
├── Hashtable
└── ConcurrentHashMap
Deque
├── ArrayDeque
└── LinkedList
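
To make the hierarchy concrete, here is a small self-contained sketch (not from the original article) showing typical uses of a List, Set, Map, and Deque:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class CollectionsDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();      // ordered, allows duplicates
        list.add("a");
        list.add("a");

        Set<String> set = new HashSet<>();          // no duplicates, no guaranteed order
        set.add("a");
        set.add("a");                               // second add is ignored

        Map<String, Integer> map = new HashMap<>(); // key-value pairs
        map.put("count", 1);

        Deque<String> deque = new ArrayDeque<>();   // double-ended queue
        deque.addFirst("head");
        deque.addLast("tail");

        System.out.println(list + " " + set + " " + map + " " + deque);
    }
}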

Does the JVM compile code?

The JVM does not compile Java source code; the JDK (via the javac compiler) compiles Java source into bytecode.

The JVM executes that bytecode (and may JIT-compile hot code paths to native machine code at runtime).

Runtime data areas in the JVM (where different kinds of variables live)

Method area --- static variables and constants

Heap --- instance variables

Stack --- local variables

public class TestClass {
    private static int staticVar = 0; // static variable, stored in the method area
    private int instanceVar;          // instance variable, stored on the heap

    public static void main(String[] args) {
        TestClass obj1 = new TestClass();
        TestClass obj2 = new TestClass();
        obj1.instanceVar = 1;
        obj2.instanceVar = 2;
        TestClass.staticVar = 3;
        System.out.println(obj1.instanceVar);    // prints 1
        System.out.println(obj2.instanceVar);    // prints 2
        System.out.println(TestClass.staticVar); // prints 3
    }
}

Local variables --- the stack:

public class StackExample {
    public static void main(String[] args) {
        int x = 5;                 // local variable, stored on the stack
        int result = factorial(x); // method call: a stack frame is created for factorial
        System.out.println("Factorial of " + x + " is " + result);
    }

    public static int factorial(int n) {
        if (n == 1) {
            return 1;                    // base case: the stack frames start to unwind
        } else {
            return n * factorial(n - 1); // recursive call: a new stack frame is created
        }
    }
}

The Java NIO framework

Java NIO (New Input/Output) is the newer input/output library.

It was introduced as an alternative to the traditional stream-based I/O.

Reading a file with Java NIO:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class NIOExample {
    public static void main(String[] args) {
        // Path of the file to read
        Path path = Paths.get("example.txt");
        // try-with-resources ensures the FileChannel is closed automatically
        try (FileChannel fileChannel = FileChannel.open(path, StandardOpenOption.READ)) {
            // Allocate a 1024-byte buffer
            ByteBuffer buffer = ByteBuffer.allocate(1024);
            // Read data from the channel into the buffer
            int bytesRead = fileChannel.read(buffer);
            // read returns -1 when there are no more bytes to read
            while (bytesRead != -1) {
                // Flip the buffer into read mode
                buffer.flip();
                // Drain the buffer until no bytes remain
                while (buffer.hasRemaining()) {
                    // Read one byte at a time and print it as a character
                    System.out.print((char) buffer.get());
                }
                // Clear the buffer to prepare for the next read
                buffer.clear();
                // Continue reading from the channel into the buffer
                bytesRead = fileChannel.read(buffer);
            }
        } catch (IOException e) {
            // Catch and handle any I/O exception
            e.printStackTrace();
        }
    }
}

How should we understand Netty?

Netty is a network application framework built on top of NIO.

With Netty you can run your own server on a port such as 8080 without using Tomcat.

How to create a server with Netty:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

public class EchoServer {
    private final int port;

    // Constructor: set the server port
    public EchoServer(int port) {
        this.port = port;
    }

    // Start the server
    public void start() throws InterruptedException {
        // Thread group that accepts incoming client connections
        EventLoopGroup bossGroup = new NioEventLoopGroup();
        // Thread group that handles I/O for the accepted connections
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            // Create and configure the ServerBootstrap
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class) // use the NIO transport channel
                    .childHandler(new ChannelInitializer<SocketChannel>() { // configure child channels
                        @Override
                        protected void initChannel(SocketChannel socketChannel) {
                            // Get the channel's pipeline
                            ChannelPipeline pipeline = socketChannel.pipeline();
                            // Decode incoming bytes into Strings
                            pipeline.addLast(new StringDecoder());
                            // Encode outgoing Strings into bytes
                            pipeline.addLast(new StringEncoder());
                            // Custom business-logic handler
                            pipeline.addLast(new EchoServerHandler());
                        }
                    });
            // Bind the port and start the server
            ChannelFuture future = bootstrap.bind(port).sync();
            // Wait until the server socket is closed
            future.channel().closeFuture().sync();
        } finally {
            // Shut down the thread groups gracefully
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Create and start the echo server
        new EchoServer(8080).start();
    }
}

// Custom business-logic handler, placed in its own file (EchoServerHandler.java)
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

public class EchoServerHandler extends SimpleChannelInboundHandler<String> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String msg) {
        // Print the received message
        System.out.println("Received: " + msg);
        // Echo the received message back to the client
        ctx.writeAndFlush(msg);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        // Print the exception stack trace
        cause.printStackTrace();
        // Close the channel
        ctx.close();
    }
}

Which web development frameworks does Java offer?

Spring Boot

Spring MVC

Is Netty a web development framework?

No. Netty is a network communication framework.

What is the difference between a network communication framework and a web development framework, and when is each used?

A network communication framework handles data transfer directly over TCP or UDP.

A web development framework is built around HTTP, the protocol used by web pages.
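
For contrast with the Netty TCP example that follows, here is a minimal Spring Boot HTTP endpoint sketch; the class name HelloController and the /hello path are made up for illustration and are not part of the original article:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class HelloController {
    // The web framework speaks HTTP for us: routing, parsing, and response encoding
    @GetMapping("/hello")
    public String hello() {
        return "Hello over HTTP";
    }

    public static void main(String[] args) {
        SpringApplication.run(HelloController.class, args);
    }
}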

A network communication example with Netty: sending JSON data over TCP.

Server:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.json.JsonObjectDecoder;
import io.netty.handler.codec.string.StringEncoder;

public class JsonServer {
    private final int port;

    public JsonServer(int port) {
        this.port = port;
    }

    public void start() throws InterruptedException {
        // Two event loop groups: one accepts connections, one handles data transfer
        EventLoopGroup bossGroup = new NioEventLoopGroup();
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
                    // Use the NIO ServerSocketChannel to handle incoming connection requests
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        public void initChannel(SocketChannel ch) {
                            ChannelPipeline p = ch.pipeline();
                            // JSON decoder: frames the incoming byte stream into complete JSON objects
                            p.addLast(new JsonObjectDecoder());
                            // String encoder for sending data back to the client
                            p.addLast(new StringEncoder());
                            // Custom handler that processes the received JSON data (not shown here)
                            p.addLast(new JsonServerHandler());
                        }
                    });
            // Bind the port and start the server
            ChannelFuture f = b.bind(port).sync();
            f.channel().closeFuture().sync();
        } finally {
            // Shut down the event loop groups gracefully
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new JsonServer(8080).start();
    }
}

Client:

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

public class JsonClient {
    private final String host;
    private final int port;

    public JsonClient(String host, int port) {
        this.host = host;
        this.port = port;
    }

    public void start() throws InterruptedException {
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap();
            b.group(group)
                    // Use the NIO SocketChannel for the outgoing connection
                    .channel(NioSocketChannel.class)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        public void initChannel(SocketChannel ch) {
                            ChannelPipeline p = ch.pipeline();
                            // Encode outgoing JSON text as bytes. Netty ships a JsonObjectDecoder
                            // but no JSON encoder, so the client sends JSON strings via StringEncoder.
                            p.addLast(new StringEncoder());
                            // Decode the server's replies into strings
                            p.addLast(new StringDecoder());
                            // Custom handler that writes JSON messages to the server (not shown here)
                            p.addLast(new JsonClientHandler());
                        }
                    });
            // Connect to the server and start the client
            ChannelFuture f = b.connect(host, port).sync();
            f.channel().closeFuture().sync();
        } finally {
            // Shut down the event loop group gracefully
            group.shutdownGracefully();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new JsonClient("localhost", 8080).start();
    }
}

How do you use Elasticsearch (ES)?

Download Elasticsearch.

Open a terminal in the Elasticsearch bin directory and run elasticsearch.bat (on Windows).

How do you store data in ES?

import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;

import java.io.IOException;

public class ElasticsearchExample {
    public static void main(String[] args) {
        // Create the Elasticsearch client
        RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")));
        // The JSON document to index
        String jsonString = "{" +
                "\"user\":\"kimchy\"," +
                "\"postDate\":\"2024-06-10\"," +
                "\"message\":\"trying out Elasticsearch\"" +
                "}";
        // Index request: store the document in the "posts" index with ID 1
        IndexRequest indexRequest = new IndexRequest("posts")
                .id("1") // the ID can be omitted to let Elasticsearch generate one
                .source(jsonString, XContentType.JSON);
        try {
            // Execute the index request
            IndexResponse indexResponse = client.index(indexRequest, RequestOptions.DEFAULT);
            System.out.println("Indexed document with id: " + indexResponse.getId());
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                // Close the client
                client.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

A simple example of indexing and querying data with Elasticsearch:

import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

import java.io.IOException;

public class ElasticsearchExample {
    public static void main(String[] args) {
        // Create the Elasticsearch client
        RestHighLevelClient client = new RestHighLevelClient(RestClient.builder(
                new HttpHost("localhost", 9200, "http")));
        // Index a document
        IndexRequest indexRequest = new IndexRequest("my_index")
                .id("1")
                .source("{\"field\":\"value\"}", XContentType.JSON);
        try {
            IndexResponse indexResponse = client.index(indexRequest, RequestOptions.DEFAULT);
            System.out.println("Indexed document with id: " + indexResponse.getId());
            // Query the document
            SearchRequest searchRequest = new SearchRequest("my_index");
            SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
            searchSourceBuilder.query(QueryBuilders.matchQuery("field", "value"));
            searchRequest.source(searchSourceBuilder);
            SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
            System.out.println("Search results: " + searchResponse.getHits().getHits().length);
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            // Close the client
            try {
                client.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

Connecting to MongoDB from Java:

import com.mongodb.client.FindIterable;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import org.bson.types.ObjectId;

public class MongoDBExample {
    public static void main(String[] args) {
        // Create a MongoDB client and connect to the MongoDB server
        MongoClient mongoClient = MongoClients.create("mongodb://localhost:27017");
        // Get the database; it is created automatically if it does not exist
        MongoDatabase database = mongoClient.getDatabase("testdb");
        // Get the collection; it is created automatically if it does not exist
        MongoCollection<Document> collection = database.getCollection("testCollection");
        // Build a document
        Document doc = new Document("_id", new ObjectId())
                .append("name", "John Doe")
                .append("age", 29)
                .append("address", new Document("street", "123 Main St")
                        .append("city", "Anytown")
                        .append("state", "CA")
                        .append("zip", "12345"));
        // Insert the document into the collection
        collection.insertOne(doc);
        System.out.println("Document inserted successfully");
        // Read documents
        FindIterable<Document> documents = collection.find();
        for (Document document : documents) {
            System.out.println("Retrieved document: " + document.toJson());
        }
        // Update a document
        Document query = new Document("name", "John Doe");
        Document update = new Document("$set", new Document("age", 30));
        collection.updateOne(query, update);
        System.out.println("Document updated successfully");
        // Delete a document
        collection.deleteOne(query);
        System.out.println("Document deleted successfully");
        // Close the MongoDB client
        mongoClient.close();
    }
}

MongoDB data types:

Strings, arrays, JSON-style (BSON) documents, and so on.
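
As a small illustration of those types using the same Java driver as above (the field names are invented for the example):

import org.bson.Document;
import java.util.Arrays;

public class MongoTypesExample {
    public static void main(String[] args) {
        Document doc = new Document("name", "Alice")                // string
                .append("tags", Arrays.asList("java", "backend"))   // array
                .append("profile", new Document("city", "Beijing")  // embedded JSON document
                        .append("age", 30));
        System.out.println(doc.toJson());
    }
}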

A brief introduction to Hadoop:

Hadoop is an open-source distributed framework for storing and processing large-scale data sets. Its main components are HDFS (the Hadoop Distributed File System) and MapReduce.
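
To make MapReduce concrete, here is a minimal word-count sketch against the standard Hadoop MapReduce API. It is an illustrative example rather than part of the original article; the HDFS input and output paths are taken from the command-line arguments:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: emit (word, 1) for every word in the input split
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reducer: sum the counts for each word
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input path
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output path
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}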

A brief introduction to Docker:

Lightweight virtual containers.

Docker is an open platform for developing, shipping, and running applications. Through containerization it provides a lightweight form of virtualization.

Redis cluster:

Server side (start six Redis instances, each with its own configuration file):

redis-server redis-7000.conf
redis-server redis-7001.conf
redis-server redis-7002.conf
redis-server redis-7003.conf
redis-server redis-7004.conf
redis-server redis-7005.conf
Client side (create the cluster, with one replica per master):

redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 --cluster-replicas 1
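
On the Java side, a minimal sketch of talking to this cluster, assuming the Jedis client library is on the classpath (the key and value are illustrative):

import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class RedisClusterExample {
    public static void main(String[] args) {
        // List a few of the cluster nodes; Jedis discovers the remaining nodes automatically
        Set<HostAndPort> nodes = new HashSet<>();
        nodes.add(new HostAndPort("127.0.0.1", 7000));
        nodes.add(new HostAndPort("127.0.0.1", 7001));
        nodes.add(new HostAndPort("127.0.0.1", 7002));

        try (JedisCluster cluster = new JedisCluster(nodes)) {
            cluster.set("greeting", "hello from the cluster");
            System.out.println(cluster.get("greeting"));
        }
    }
}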
 

Distributed transaction handling:

Spring Boot + Kafka to achieve message consistency

Producer (sends the transactional message):
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.http.ResponseEntity;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class ProducerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ProducerApplication.class, args);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(configProps);
    }
}

// REST controller that publishes messages to Kafka (its own file in a real project)
@RestController
public class MessageController {
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @PostMapping("/send")
    public ResponseEntity<String> sendMessage(@RequestParam String message) {
        kafkaTemplate.send("myTopic", message);
        return ResponseEntity.ok("Message sent");
    }
}
Consumer (processes the transactional message):
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.stereotype.Component;

@SpringBootApplication
public class ConsumerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConsumerApplication.class, args);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        configProps.put(ConsumerConfig.GROUP_ID_CONFIG, "group_id");
        configProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        configProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(configProps);
    }
}

// Listener component that consumes messages (its own file in a real project)
@Component
public class MessageListener {
    @KafkaListener(topics = "myTopic", groupId = "group_id")
    public void listen(String message) {
        System.out.println("Received message: " + message);
        // Business logic goes here, e.g. insert into the database
    }
}

 
